Test Report: KVM_Linux_crio 22101

e65f928d8ebd0537e3fd5f2753f43f3d5796d0a1:2025-12-12:42734
Test fail (13/431)

TestAddons/parallel/Registry (363.44s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 11.61147ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-f9q5b" [96c372a4-ae7e-4df5-9a48-525fc42f8bc5] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
helpers_test.go:338: TestAddons/parallel/Registry: WARNING: pod list for "kube-system" "actual-registry=true" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:386: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-081397 -n addons-081397
addons_test.go:386: TestAddons/parallel/Registry: showing logs for failed pods as of 2025-12-12 00:09:37.883162714 +0000 UTC m=+842.592795700
addons_test.go:386: (dbg) Run:  kubectl --context addons-081397 describe po registry-6b586f9694-f9q5b -n kube-system
addons_test.go:386: (dbg) kubectl --context addons-081397 describe po registry-6b586f9694-f9q5b -n kube-system:
Name:             registry-6b586f9694-f9q5b
Namespace:        kube-system
Priority:         0
Service Account:  default
Node:             addons-081397/192.168.39.2
Start Time:       Thu, 11 Dec 2025 23:56:47 +0000
Labels:           actual-registry=true
addonmanager.kubernetes.io/mode=Reconcile
kubernetes.io/minikube-addons=registry
pod-template-hash=6b586f9694
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/registry-6b586f9694
Containers:
registry:
Container ID:   
Image:          docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e
Image ID:       
Port:           5000/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ErrImagePull
Ready:          False
Restart Count:  0
Environment:
REGISTRY_STORAGE_DELETE_ENABLED:  true
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hmk7g (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-hmk7g:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason                           Age                    From               Message
----     ------                           ----                   ----               -------
Normal   Scheduled                        12m                    default-scheduler  Successfully assigned kube-system/registry-6b586f9694-f9q5b to addons-081397
Warning  Failed                           11m                    kubelet            Failed to pull image "docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e": copying system image from manifest list: determining manifest MIME type for docker://registry@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed                           8m25s (x2 over 9m59s)  kubelet            Failed to pull image "docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e": reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed                           7m1s (x4 over 11m)     kubelet            Error: ErrImagePull
Warning  Failed                           7m1s                   kubelet            Failed to pull image "docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e": fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed                           6m37s (x7 over 11m)    kubelet            Error: ImagePullBackOff
Normal   Pulling                          5m40s (x5 over 12m)    kubelet            Pulling image "docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e"
Warning  FailedToRetrieveImagePullSecret  2m47s (x26 over 12m)   kubelet            Unable to retrieve some image pull secrets (gcp-auth); attempting to pull the image may not succeed.
Normal   BackOff                          2m32s (x22 over 11m)   kubelet            Back-off pulling image "docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e"
addons_test.go:386: (dbg) Run:  kubectl --context addons-081397 logs registry-6b586f9694-f9q5b -n kube-system
addons_test.go:386: (dbg) Non-zero exit: kubectl --context addons-081397 logs registry-6b586f9694-f9q5b -n kube-system: exit status 1 (90.724059ms)

** stderr **
	Error from server (BadRequest): container "registry" in pod "registry-6b586f9694-f9q5b" is waiting to start: image can't be pulled

** /stderr **
addons_test.go:386: kubectl --context addons-081397 logs registry-6b586f9694-f9q5b -n kube-system: exit status 1
addons_test.go:387: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Registry]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-081397 -n addons-081397
helpers_test.go:253: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-081397 logs -n 25: (1.513027708s)
helpers_test.go:261: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-449217                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-449217 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-859495                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-859495 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ --download-only -p binary-mirror-928519 --alsologtostderr --binary-mirror http://127.0.0.1:46143 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-928519 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ -p binary-mirror-928519                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-928519 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ addons  │ enable dashboard -p addons-081397                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-081397        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-081397                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-081397        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ start   │ -p addons-081397 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-081397        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 12 Dec 25 00:02 UTC │
	│ addons  │ addons-081397 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:02 UTC │ 12 Dec 25 00:02 UTC │
	│ addons  │ addons-081397 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ enable headlamp -p addons-081397 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-081397                                                                                                                                                                                                                                                                                                                                                                                         │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:04 UTC │
	│ addons  │ addons-081397 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:04 UTC │ 12 Dec 25 00:04 UTC │
	│ addons  │ addons-081397 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:04 UTC │ 12 Dec 25 00:04 UTC │
	│ ssh     │ addons-081397 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:04 UTC │                     │
	│ addons  │ addons-081397 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ addons  │ addons-081397 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ip      │ addons-081397 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ addons  │ addons-081397 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ addons  │ addons-081397 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ addons  │ addons-081397 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:07 UTC │
	│ addons  │ addons-081397 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:09 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:55:51.508824  191080 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:51.508961  191080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:51.508968  191080 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:51.508973  191080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:51.509212  191080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1211 23:55:51.509810  191080 out.go:368] Setting JSON to false
	I1211 23:55:51.510832  191080 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":20296,"bootTime":1765477056,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:55:51.510906  191080 start.go:143] virtualization: kvm guest
	I1211 23:55:51.512916  191080 out.go:179] * [addons-081397] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1211 23:55:51.514286  191080 notify.go:221] Checking for updates...
	I1211 23:55:51.514305  191080 out.go:179]   - MINIKUBE_LOCATION=22101
	I1211 23:55:51.515624  191080 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:51.517281  191080 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1211 23:55:51.518706  191080 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:51.520288  191080 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:55:51.521862  191080 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:55:51.523574  191080 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 23:55:51.556952  191080 out.go:179] * Using the kvm2 driver based on user configuration
	I1211 23:55:51.558571  191080 start.go:309] selected driver: kvm2
	I1211 23:55:51.558600  191080 start.go:927] validating driver "kvm2" against <nil>
	I1211 23:55:51.558629  191080 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:55:51.559389  191080 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1211 23:55:51.559736  191080 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:55:51.559767  191080 cni.go:84] Creating CNI manager for ""
	I1211 23:55:51.559823  191080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:55:51.559835  191080 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 23:55:51.559888  191080 start.go:353] cluster config:
	{Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:55:51.560015  191080 iso.go:125] acquiring lock: {Name:mkc8bf4754eb4f0261bb252fe2c8bf1a2bf2967f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:55:51.561727  191080 out.go:179] * Starting "addons-081397" primary control-plane node in "addons-081397" cluster
	I1211 23:55:51.563063  191080 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:55:51.563108  191080 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1211 23:55:51.563116  191080 cache.go:65] Caching tarball of preloaded images
	I1211 23:55:51.563256  191080 preload.go:238] Found /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:55:51.563274  191080 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1211 23:55:51.563705  191080 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/config.json ...
	I1211 23:55:51.563732  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/config.json: {Name:mk3f56184a595aa65236de2721f264b9d77bbfd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:55:51.563928  191080 start.go:360] acquireMachinesLock for addons-081397: {Name:mk7557506c78bc6cb73692cb48d3039f590aa12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:55:51.564001  191080 start.go:364] duration metric: took 52.499µs to acquireMachinesLock for "addons-081397"
	I1211 23:55:51.564027  191080 start.go:93] Provisioning new machine with config: &{Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:55:51.564111  191080 start.go:125] createHost starting for "" (driver="kvm2")
	I1211 23:55:51.566772  191080 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1211 23:55:51.567024  191080 start.go:159] libmachine.API.Create for "addons-081397" (driver="kvm2")
	I1211 23:55:51.567078  191080 client.go:173] LocalClient.Create starting
	I1211 23:55:51.567214  191080 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem
	I1211 23:55:51.634646  191080 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem
	I1211 23:55:51.761850  191080 main.go:143] libmachine: creating domain...
	I1211 23:55:51.761879  191080 main.go:143] libmachine: creating network...
	I1211 23:55:51.763511  191080 main.go:143] libmachine: found existing default network
	I1211 23:55:51.763716  191080 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1211 23:55:51.764419  191080 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dae890}
	I1211 23:55:51.764553  191080 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-081397</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1211 23:55:51.771343  191080 main.go:143] libmachine: creating private network mk-addons-081397 192.168.39.0/24...
	I1211 23:55:51.876571  191080 main.go:143] libmachine: private network mk-addons-081397 192.168.39.0/24 created
	I1211 23:55:51.876999  191080 main.go:143] libmachine: <network>
	  <name>mk-addons-081397</name>
	  <uuid>f81ed5cb-0804-4477-9781-0372afa282e4</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:59:29:45'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1211 23:55:51.877044  191080 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397 ...
	I1211 23:55:51.877068  191080 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22101-186349/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
	I1211 23:55:51.877078  191080 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:51.877153  191080 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22101-186349/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22101-186349/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso...
	I1211 23:55:52.159080  191080 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa...
	I1211 23:55:52.239938  191080 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/addons-081397.rawdisk...
	I1211 23:55:52.239993  191080 main.go:143] libmachine: Writing magic tar header
	I1211 23:55:52.240026  191080 main.go:143] libmachine: Writing SSH key tar header
	I1211 23:55:52.240106  191080 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397 ...
	I1211 23:55:52.240169  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397
	I1211 23:55:52.240206  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397 (perms=drwx------)
	I1211 23:55:52.240215  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349/.minikube/machines
	I1211 23:55:52.240224  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:55:52.240232  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:52.240240  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349/.minikube (perms=drwxr-xr-x)
	I1211 23:55:52.240250  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349
	I1211 23:55:52.240258  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349 (perms=drwxrwxr-x)
	I1211 23:55:52.240268  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:55:52.240275  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:55:52.240283  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1211 23:55:52.240291  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1211 23:55:52.240299  191080 main.go:143] libmachine: checking permissions on dir: /home
	I1211 23:55:52.240306  191080 main.go:143] libmachine: skipping /home - not owner
	I1211 23:55:52.240309  191080 main.go:143] libmachine: defining domain...
	I1211 23:55:52.242720  191080 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-081397</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/addons-081397.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-081397'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1211 23:55:52.249320  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:07:bd:c2 in network default
	I1211 23:55:52.250641  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:52.250680  191080 main.go:143] libmachine: starting domain...
	I1211 23:55:52.250686  191080 main.go:143] libmachine: ensuring networks are active...
	I1211 23:55:52.252166  191080 main.go:143] libmachine: Ensuring network default is active
	I1211 23:55:52.253166  191080 main.go:143] libmachine: Ensuring network mk-addons-081397 is active
	I1211 23:55:52.254226  191080 main.go:143] libmachine: getting domain XML...
	I1211 23:55:52.255944  191080 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-081397</name>
	  <uuid>132f08c0-43de-4a3f-abcb-9cf58535d902</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/addons-081397.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:2b:32:89'/>
	      <source network='mk-addons-081397'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:07:bd:c2'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1211 23:55:53.688550  191080 main.go:143] libmachine: waiting for domain to start...
	I1211 23:55:53.691114  191080 main.go:143] libmachine: domain is now running
	I1211 23:55:53.691144  191080 main.go:143] libmachine: waiting for IP...
	I1211 23:55:53.692424  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:53.693801  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:53.693826  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:53.694334  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:53.694402  191080 retry.go:31] will retry after 260.574844ms: waiting for domain to come up
	I1211 23:55:53.957397  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:53.958627  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:53.958657  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:53.959170  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:53.959230  191080 retry.go:31] will retry after 343.725464ms: waiting for domain to come up
	I1211 23:55:54.305232  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:54.306166  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:54.306193  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:54.306730  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:54.306782  191080 retry.go:31] will retry after 478.083756ms: waiting for domain to come up
	I1211 23:55:54.787051  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:54.788263  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:54.788294  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:54.788968  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:54.789021  191080 retry.go:31] will retry after 586.83961ms: waiting for domain to come up
	I1211 23:55:55.378616  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:55.379761  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:55.379794  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:55.380438  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:55.380514  191080 retry.go:31] will retry after 629.739442ms: waiting for domain to come up
	I1211 23:55:56.011678  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:56.012771  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:56.012794  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:56.013869  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:56.013951  191080 retry.go:31] will retry after 838.290437ms: waiting for domain to come up
	I1211 23:55:56.853752  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:56.854450  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:56.854485  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:56.854918  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:56.854979  191080 retry.go:31] will retry after 1.020736825s: waiting for domain to come up
	I1211 23:55:57.877350  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:57.878104  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:57.878134  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:57.878522  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:57.878563  191080 retry.go:31] will retry after 1.394206578s: waiting for domain to come up
	I1211 23:55:59.275153  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:59.276377  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:59.276409  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:59.276994  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:59.277049  191080 retry.go:31] will retry after 1.4774988s: waiting for domain to come up
	I1211 23:56:00.757189  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:00.758049  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:00.758071  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:00.758450  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:00.758518  191080 retry.go:31] will retry after 1.704024367s: waiting for domain to come up
	I1211 23:56:02.464578  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:02.465672  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:02.465713  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:02.466390  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:02.466496  191080 retry.go:31] will retry after 2.558039009s: waiting for domain to come up
	I1211 23:56:05.028156  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:05.029424  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:05.029476  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:05.030141  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:05.030218  191080 retry.go:31] will retry after 2.713185396s: waiting for domain to come up
	I1211 23:56:07.745837  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:07.746810  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:07.746835  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:07.747308  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:07.747359  191080 retry.go:31] will retry after 3.017005916s: waiting for domain to come up
	I1211 23:56:10.768106  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:10.769156  191080 main.go:143] libmachine: domain addons-081397 has current primary IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:10.769185  191080 main.go:143] libmachine: found domain IP: 192.168.39.2
	I1211 23:56:10.769196  191080 main.go:143] libmachine: reserving static IP address...
	I1211 23:56:10.769843  191080 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-081397", mac: "52:54:00:2b:32:89", ip: "192.168.39.2"} in network mk-addons-081397
	I1211 23:56:11.003302  191080 main.go:143] libmachine: reserved static IP address 192.168.39.2 for domain addons-081397
	I1211 23:56:11.003331  191080 main.go:143] libmachine: waiting for SSH...
	I1211 23:56:11.003337  191080 main.go:143] libmachine: Getting to WaitForSSH function...
	I1211 23:56:11.008569  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.009090  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.009115  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.009350  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.009619  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.009631  191080 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1211 23:56:11.126360  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:56:11.126895  191080 main.go:143] libmachine: domain creation complete
	I1211 23:56:11.129784  191080 machine.go:94] provisionDockerMachine start ...
	I1211 23:56:11.134589  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.135537  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.135574  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.136010  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.136277  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.136290  191080 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 23:56:11.257254  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1211 23:56:11.257302  191080 buildroot.go:166] provisioning hostname "addons-081397"
	I1211 23:56:11.261573  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.262389  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.262457  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.262926  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.263212  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.263234  191080 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-081397 && echo "addons-081397" | sudo tee /etc/hostname
	I1211 23:56:11.410142  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-081397
	
	I1211 23:56:11.414271  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.414882  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.414917  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.415210  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.415441  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.415482  191080 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-081397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-081397/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-081397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:56:11.555358  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:56:11.555395  191080 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22101-186349/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-186349/.minikube}
	I1211 23:56:11.555420  191080 buildroot.go:174] setting up certificates
	I1211 23:56:11.555443  191080 provision.go:84] configureAuth start
	I1211 23:56:11.558885  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.559509  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.559565  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.562716  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.563314  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.563346  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.563750  191080 provision.go:143] copyHostCerts
	I1211 23:56:11.563901  191080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/cert.pem (1123 bytes)
	I1211 23:56:11.564087  191080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/key.pem (1675 bytes)
	I1211 23:56:11.564163  191080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/ca.pem (1082 bytes)
	I1211 23:56:11.564231  191080 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem org=jenkins.addons-081397 san=[127.0.0.1 192.168.39.2 addons-081397 localhost minikube]
	I1211 23:56:11.604096  191080 provision.go:177] copyRemoteCerts
	I1211 23:56:11.604171  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:56:11.607337  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.607977  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.608015  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.608218  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:11.699591  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 23:56:11.739646  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1211 23:56:11.780870  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 23:56:11.821711  191080 provision.go:87] duration metric: took 266.231617ms to configureAuth
	I1211 23:56:11.821755  191080 buildroot.go:189] setting minikube options for container-runtime
	I1211 23:56:11.822007  191080 config.go:182] Loaded profile config "addons-081397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:11.826045  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.826550  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.826578  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.826785  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.827068  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.827088  191080 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:56:12.345303  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 23:56:12.345334  191080 machine.go:97] duration metric: took 1.2155135s to provisionDockerMachine
	I1211 23:56:12.345348  191080 client.go:176] duration metric: took 20.778259004s to LocalClient.Create
	I1211 23:56:12.345369  191080 start.go:167] duration metric: took 20.77834555s to libmachine.API.Create "addons-081397"
	I1211 23:56:12.345379  191080 start.go:293] postStartSetup for "addons-081397" (driver="kvm2")
	I1211 23:56:12.345393  191080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:56:12.345498  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:56:12.350156  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.351165  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.351226  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.351544  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:12.444149  191080 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:56:12.450354  191080 info.go:137] Remote host: Buildroot 2025.02
	I1211 23:56:12.450386  191080 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-186349/.minikube/addons for local assets ...
	I1211 23:56:12.450452  191080 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-186349/.minikube/files for local assets ...
	I1211 23:56:12.450508  191080 start.go:296] duration metric: took 105.122285ms for postStartSetup
	I1211 23:56:12.489061  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.489811  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.489855  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.490235  191080 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/config.json ...
	I1211 23:56:12.490597  191080 start.go:128] duration metric: took 20.9264692s to createHost
	I1211 23:56:12.493999  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.494451  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.494490  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.494674  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:12.494897  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:12.494909  191080 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1211 23:56:12.615405  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765497372.576443288
	
	I1211 23:56:12.615439  191080 fix.go:216] guest clock: 1765497372.576443288
	I1211 23:56:12.615447  191080 fix.go:229] Guest: 2025-12-11 23:56:12.576443288 +0000 UTC Remote: 2025-12-11 23:56:12.490625673 +0000 UTC m=+21.040527790 (delta=85.817615ms)
	I1211 23:56:12.615500  191080 fix.go:200] guest clock delta is within tolerance: 85.817615ms
	I1211 23:56:12.615508  191080 start.go:83] releasing machines lock for "addons-081397", held for 21.051491664s
	I1211 23:56:12.619172  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.619799  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.619831  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.620772  191080 ssh_runner.go:195] Run: cat /version.json
	I1211 23:56:12.620876  191080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:56:12.625375  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.625530  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.626036  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.626063  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.626330  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.626345  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:12.626381  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.626618  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:12.717381  191080 ssh_runner.go:195] Run: systemctl --version
	I1211 23:56:12.749852  191080 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:56:13.078529  191080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 23:56:13.088885  191080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 23:56:13.089007  191080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:56:13.118717  191080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1211 23:56:13.118763  191080 start.go:496] detecting cgroup driver to use...
	I1211 23:56:13.118864  191080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:56:13.148400  191080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:56:13.169798  191080 docker.go:218] disabling cri-docker service (if available) ...
	I1211 23:56:13.169888  191080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:56:13.191896  191080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:56:13.211802  191080 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:56:13.376765  191080 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:56:13.606305  191080 docker.go:234] disabling docker service ...
	I1211 23:56:13.606403  191080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:56:13.625180  191080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:56:13.643232  191080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:56:13.829218  191080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:56:14.000354  191080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:56:14.021612  191080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:56:14.050867  191080 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 23:56:14.050963  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.068612  191080 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 23:56:14.068701  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.086254  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.104697  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.123074  191080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:56:14.143227  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.161079  191080 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.188908  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.207821  191080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:56:14.223124  191080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1211 23:56:14.223216  191080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1211 23:56:14.252980  191080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 23:56:14.270522  191080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:14.430888  191080 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 23:56:14.564516  191080 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:56:14.564671  191080 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:56:14.574658  191080 start.go:564] Will wait 60s for crictl version
	I1211 23:56:14.574811  191080 ssh_runner.go:195] Run: which crictl
	I1211 23:56:14.580945  191080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 23:56:14.633033  191080 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1211 23:56:14.633155  191080 ssh_runner.go:195] Run: crio --version
	I1211 23:56:14.669436  191080 ssh_runner.go:195] Run: crio --version
	I1211 23:56:14.710252  191080 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1211 23:56:14.715883  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:14.716478  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:14.716519  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:14.716765  191080 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1211 23:56:14.724237  191080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:56:14.744504  191080 kubeadm.go:884] updating cluster {Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 23:56:14.744646  191080 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:56:14.744696  191080 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:56:14.782232  191080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1211 23:56:14.782317  191080 ssh_runner.go:195] Run: which lz4
	I1211 23:56:14.788630  191080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 23:56:14.795116  191080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 23:56:14.795159  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1211 23:56:16.445424  191080 crio.go:462] duration metric: took 1.656827131s to copy over tarball
	I1211 23:56:16.445532  191080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 23:56:18.102205  191080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.656625041s)
	I1211 23:56:18.102245  191080 crio.go:469] duration metric: took 1.656768065s to extract the tarball
	I1211 23:56:18.102258  191080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1211 23:56:18.141443  191080 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:56:18.189200  191080 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:56:18.189229  191080 cache_images.go:86] Images are preloaded, skipping loading
	I1211 23:56:18.189239  191080 kubeadm.go:935] updating node { 192.168.39.2 8443 v1.34.2 crio true true} ...
	I1211 23:56:18.189344  191080 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-081397 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 23:56:18.189436  191080 ssh_runner.go:195] Run: crio config
	I1211 23:56:18.243325  191080 cni.go:84] Creating CNI manager for ""
	I1211 23:56:18.243368  191080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:56:18.243392  191080 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 23:56:18.243429  191080 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-081397 NodeName:addons-081397 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:56:18.243664  191080 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-081397"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 23:56:18.243802  191080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1211 23:56:18.259378  191080 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 23:56:18.259504  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 23:56:18.274263  191080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1211 23:56:18.301193  191080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:56:18.326928  191080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1211 23:56:18.352300  191080 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I1211 23:56:18.358187  191080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:56:18.378953  191080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:18.546541  191080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:56:18.581301  191080 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397 for IP: 192.168.39.2
	I1211 23:56:18.581326  191080 certs.go:195] generating shared ca certs ...
	I1211 23:56:18.581346  191080 certs.go:227] acquiring lock for ca certs: {Name:mkdc58adfd2cc299a76aeec81ac0d7f7d2a38e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.581537  191080 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key
	I1211 23:56:18.667363  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt ...
	I1211 23:56:18.667401  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt: {Name:mk1b55f33c9202ab57b68cfcba7feed18a5c869b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.667594  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key ...
	I1211 23:56:18.667607  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key: {Name:mk31aac21dc0da02b77cc3d7268007e3ddde417b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.667688  191080 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key
	I1211 23:56:18.787173  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt ...
	I1211 23:56:18.787207  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt: {Name:mk50e6f78e87c39b691065db3fbc22d4178cbab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.787389  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key ...
	I1211 23:56:18.787400  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key: {Name:mk3201307c9797e697c52cf7944b78460ad79885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.787484  191080 certs.go:257] generating profile certs ...
	I1211 23:56:18.787545  191080 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.key
	I1211 23:56:18.787567  191080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt with IP's: []
	I1211 23:56:18.836629  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt ...
	I1211 23:56:18.836666  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: {Name:mk4cd9c65ec1631677a6989710916cca92666039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.836848  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.key ...
	I1211 23:56:18.836869  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.key: {Name:mk158319f878ba2a2974fa05c9c5e81406b1ff04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.837128  191080 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68
	I1211 23:56:18.837174  191080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2]
	I1211 23:56:18.895323  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68 ...
	I1211 23:56:18.895360  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68: {Name:mka19cf3aa517a67c9823b9db6a0564ae2c88f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.895568  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68 ...
	I1211 23:56:18.895582  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68: {Name:mkcb32c8b3892cdbb32375c99cf73efb7e2d2ebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.895669  191080 certs.go:382] copying /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68 -> /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt
	I1211 23:56:18.895740  191080 certs.go:386] copying /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68 -> /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key
	I1211 23:56:18.895792  191080 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key
	I1211 23:56:18.895810  191080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt with IP's: []
	I1211 23:56:19.059957  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt ...
	I1211 23:56:19.059996  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt: {Name:mkeece2e2a9106cbaddd7935ae5c93b8b6536c2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:19.060202  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key ...
	I1211 23:56:19.060217  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key: {Name:mk7fa3201305a84265a30d592c7bfaa4ea9d3d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:19.060422  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 23:56:19.060478  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem (1082 bytes)
	I1211 23:56:19.060506  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:56:19.060532  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem (1675 bytes)
	I1211 23:56:19.061341  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:56:19.104179  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 23:56:19.148345  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:56:19.191324  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1211 23:56:19.230603  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1211 23:56:19.274335  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 23:56:19.314103  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:56:19.355420  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1211 23:56:19.392791  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:56:19.429841  191080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:56:19.455328  191080 ssh_runner.go:195] Run: openssl version
	I1211 23:56:19.463919  191080 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.478287  191080 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 23:56:19.494141  191080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.501262  191080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.501357  191080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.511987  191080 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 23:56:19.527366  191080 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1211 23:56:19.544629  191080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:56:19.551139  191080 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:56:19.551211  191080 kubeadm.go:401] StartCluster: {Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:56:19.551367  191080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:56:19.551501  191080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:56:19.601329  191080 cri.go:89] found id: ""
	I1211 23:56:19.601414  191080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:56:19.615890  191080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:56:19.632616  191080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:56:19.646731  191080 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:56:19.646765  191080 kubeadm.go:158] found existing configuration files:
	
	I1211 23:56:19.646828  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:56:19.660106  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:56:19.660190  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:56:19.676276  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:56:19.690027  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:56:19.690116  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:56:19.705756  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:56:19.720625  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:56:19.720715  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:56:19.735359  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:56:19.750390  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:56:19.750481  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 23:56:19.766951  191080 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 23:56:19.839756  191080 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1211 23:56:19.839847  191080 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 23:56:19.990602  191080 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:56:19.990863  191080 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:56:19.991043  191080 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:56:20.010193  191080 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:56:20.165972  191080 out.go:252]   - Generating certificates and keys ...
	I1211 23:56:20.166144  191080 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 23:56:20.166252  191080 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 23:56:20.166347  191080 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:56:20.551090  191080 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:56:20.773761  191080 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:56:21.138092  191080 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1211 23:56:21.423874  191080 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1211 23:56:21.424042  191080 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-081397 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I1211 23:56:21.781372  191080 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1211 23:56:21.781631  191080 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-081397 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I1211 23:56:22.783972  191080 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:56:22.973180  191080 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:56:23.396371  191080 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1211 23:56:23.396644  191080 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:56:23.822810  191080 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:56:24.134647  191080 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:56:24.293087  191080 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:56:24.542047  191080 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:56:24.865144  191080 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:56:24.865682  191080 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:56:24.869746  191080 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:56:24.871219  191080 out.go:252]   - Booting up control plane ...
	I1211 23:56:24.871351  191080 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:56:24.871523  191080 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:56:24.871597  191080 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:56:24.889102  191080 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:56:24.889275  191080 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 23:56:24.898513  191080 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 23:56:24.899113  191080 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:56:24.899188  191080 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 23:56:25.090240  191080 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:56:25.090397  191080 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:56:26.591737  191080 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502403531s
	I1211 23:56:26.595003  191080 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1211 23:56:26.595170  191080 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.2:8443/livez
	I1211 23:56:26.595328  191080 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1211 23:56:26.595488  191080 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1211 23:56:29.712995  191080 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.118803589s
	I1211 23:56:31.068676  191080 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.475444759s
	I1211 23:56:33.595001  191080 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002476016s
	I1211 23:56:33.626020  191080 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:56:33.642768  191080 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:56:33.672411  191080 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:56:33.672732  191080 kubeadm.go:319] [mark-control-plane] Marking the node addons-081397 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:56:33.697567  191080 kubeadm.go:319] [bootstrap-token] Using token: fx6xk6.14clsj7mtuippxxx
	I1211 23:56:33.699696  191080 out.go:252]   - Configuring RBAC rules ...
	I1211 23:56:33.699861  191080 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:56:33.705146  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:56:33.724431  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:56:33.735134  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:56:33.742267  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:56:33.751087  191080 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:56:34.005984  191080 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:56:34.545250  191080 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1211 23:56:35.004202  191080 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1211 23:56:35.005119  191080 kubeadm.go:319] 
	I1211 23:56:35.005179  191080 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1211 23:56:35.005184  191080 kubeadm.go:319] 
	I1211 23:56:35.005261  191080 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1211 23:56:35.005268  191080 kubeadm.go:319] 
	I1211 23:56:35.005289  191080 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1211 23:56:35.005347  191080 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:56:35.005431  191080 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:56:35.005483  191080 kubeadm.go:319] 
	I1211 23:56:35.005568  191080 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1211 23:56:35.005579  191080 kubeadm.go:319] 
	I1211 23:56:35.005647  191080 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:56:35.005662  191080 kubeadm.go:319] 
	I1211 23:56:35.005707  191080 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1211 23:56:35.005772  191080 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:56:35.005838  191080 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:56:35.005844  191080 kubeadm.go:319] 
	I1211 23:56:35.005915  191080 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:56:35.005983  191080 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1211 23:56:35.005989  191080 kubeadm.go:319] 
	I1211 23:56:35.006133  191080 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fx6xk6.14clsj7mtuippxxx \
	I1211 23:56:35.006283  191080 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c0b88820597315620ec0510f9ac83d55213c46f15e2d7641e43c80784b0671ae \
	I1211 23:56:35.006317  191080 kubeadm.go:319] 	--control-plane 
	I1211 23:56:35.006322  191080 kubeadm.go:319] 
	I1211 23:56:35.006403  191080 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:56:35.006410  191080 kubeadm.go:319] 
	I1211 23:56:35.006504  191080 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fx6xk6.14clsj7mtuippxxx \
	I1211 23:56:35.006639  191080 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c0b88820597315620ec0510f9ac83d55213c46f15e2d7641e43c80784b0671ae 
	I1211 23:56:35.009065  191080 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 23:56:35.009128  191080 cni.go:84] Creating CNI manager for ""
	I1211 23:56:35.009169  191080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:56:35.012077  191080 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1211 23:56:35.013875  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1211 23:56:35.030825  191080 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1211 23:56:35.061826  191080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:56:35.061965  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:35.061967  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-081397 minikube.k8s.io/updated_at=2025_12_11T23_56_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=addons-081397 minikube.k8s.io/primary=true
	I1211 23:56:35.142016  191080 ops.go:34] apiserver oom_adj: -16
	I1211 23:56:35.257509  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:35.758327  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:36.257620  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:36.757733  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:37.258377  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:37.758134  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:38.258440  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:38.758050  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:39.258437  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:39.757704  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:40.258657  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:40.495051  191080 kubeadm.go:1114] duration metric: took 5.433189491s to wait for elevateKubeSystemPrivileges
	I1211 23:56:40.495110  191080 kubeadm.go:403] duration metric: took 20.943905559s to StartCluster
	I1211 23:56:40.495141  191080 settings.go:142] acquiring lock: {Name:mkc54bc00cde7f692cc672e67ab0af4ae6a15c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:40.495326  191080 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1211 23:56:40.495951  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/kubeconfig: {Name:mkdf9d6588b522077beb3bc03f9eff4a2b248de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:40.496234  191080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:56:40.496280  191080 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:56:40.496340  191080 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1211 23:56:40.496488  191080 addons.go:70] Setting yakd=true in profile "addons-081397"
	I1211 23:56:40.496513  191080 addons.go:239] Setting addon yakd=true in "addons-081397"
	I1211 23:56:40.496519  191080 addons.go:70] Setting inspektor-gadget=true in profile "addons-081397"
	I1211 23:56:40.496555  191080 addons.go:239] Setting addon inspektor-gadget=true in "addons-081397"
	I1211 23:56:40.496571  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496571  191080 addons.go:70] Setting ingress=true in profile "addons-081397"
	I1211 23:56:40.496589  191080 config.go:182] Loaded profile config "addons-081397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:40.496605  191080 addons.go:239] Setting addon ingress=true in "addons-081397"
	I1211 23:56:40.496607  191080 addons.go:70] Setting metrics-server=true in profile "addons-081397"
	I1211 23:56:40.496619  191080 addons.go:70] Setting ingress-dns=true in profile "addons-081397"
	I1211 23:56:40.496623  191080 addons.go:239] Setting addon metrics-server=true in "addons-081397"
	I1211 23:56:40.496630  191080 addons.go:70] Setting cloud-spanner=true in profile "addons-081397"
	I1211 23:56:40.496582  191080 addons.go:70] Setting registry-creds=true in profile "addons-081397"
	I1211 23:56:40.496643  191080 addons.go:70] Setting gcp-auth=true in profile "addons-081397"
	I1211 23:56:40.496649  191080 addons.go:239] Setting addon cloud-spanner=true in "addons-081397"
	I1211 23:56:40.496652  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496658  191080 addons.go:239] Setting addon registry-creds=true in "addons-081397"
	I1211 23:56:40.496662  191080 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-081397"
	I1211 23:56:40.496670  191080 mustload.go:66] Loading cluster: addons-081397
	I1211 23:56:40.496674  191080 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-081397"
	I1211 23:56:40.496687  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496694  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496707  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496846  191080 config.go:182] Loaded profile config "addons-081397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:40.497455  191080 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-081397"
	I1211 23:56:40.497568  191080 addons.go:70] Setting registry=true in profile "addons-081397"
	I1211 23:56:40.497576  191080 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-081397"
	I1211 23:56:40.497609  191080 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-081397"
	I1211 23:56:40.497628  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497632  191080 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-081397"
	I1211 23:56:40.497653  191080 addons.go:70] Setting volcano=true in profile "addons-081397"
	I1211 23:56:40.497674  191080 addons.go:239] Setting addon volcano=true in "addons-081397"
	I1211 23:56:40.497708  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497837  191080 addons.go:70] Setting volumesnapshots=true in profile "addons-081397"
	I1211 23:56:40.497852  191080 addons.go:239] Setting addon volumesnapshots=true in "addons-081397"
	I1211 23:56:40.496582  191080 addons.go:70] Setting default-storageclass=true in profile "addons-081397"
	I1211 23:56:40.497876  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497894  191080 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-081397"
	I1211 23:56:40.496631  191080 addons.go:239] Setting addon ingress-dns=true in "addons-081397"
	I1211 23:56:40.498289  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496621  191080 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-081397"
	I1211 23:56:40.498652  191080 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-081397"
	I1211 23:56:40.498685  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.499011  191080 addons.go:70] Setting storage-provisioner=true in profile "addons-081397"
	I1211 23:56:40.499034  191080 addons.go:239] Setting addon storage-provisioner=true in "addons-081397"
	I1211 23:56:40.499062  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496606  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497596  191080 addons.go:239] Setting addon registry=true in "addons-081397"
	I1211 23:56:40.496653  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.499671  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.500663  191080 out.go:179] * Verifying Kubernetes components...
	I1211 23:56:40.502382  191080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:40.503922  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.506960  191080 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1211 23:56:40.507005  191080 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1211 23:56:40.507060  191080 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1211 23:56:40.506993  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:40.507197  191080 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-081397"
	I1211 23:56:40.507613  191080 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	W1211 23:56:40.508273  191080 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1211 23:56:40.508767  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.508846  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1211 23:56:40.508884  191080 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1211 23:56:40.508983  191080 addons.go:239] Setting addon default-storageclass=true in "addons-081397"
	I1211 23:56:40.509037  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.509123  191080 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1211 23:56:40.509134  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1211 23:56:40.509862  191080 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1211 23:56:40.509879  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1211 23:56:40.510705  191080 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1211 23:56:40.510765  191080 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:56:40.510708  191080 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1211 23:56:40.510709  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1211 23:56:40.510780  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1211 23:56:40.510963  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1211 23:56:40.512352  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1211 23:56:40.512423  191080 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:56:40.512795  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1211 23:56:40.513366  191080 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1211 23:56:40.513405  191080 out.go:179]   - Using image docker.io/registry:3.0.0
	I1211 23:56:40.513427  191080 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:56:40.513856  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1211 23:56:40.513452  191080 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:56:40.513569  191080 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1211 23:56:40.514419  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1211 23:56:40.514823  191080 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1211 23:56:40.515501  191080 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:56:40.515566  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1211 23:56:40.516012  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:40.516028  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1211 23:56:40.516032  191080 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:56:40.516099  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:56:40.516097  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1211 23:56:40.516114  191080 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1211 23:56:40.517202  191080 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:56:40.517226  191080 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:56:40.517560  191080 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1211 23:56:40.517676  191080 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1211 23:56:40.517948  191080 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:56:40.517967  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1211 23:56:40.519009  191080 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1211 23:56:40.519029  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1211 23:56:40.519106  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1211 23:56:40.520326  191080 out.go:179]   - Using image docker.io/busybox:stable
	I1211 23:56:40.521667  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1211 23:56:40.521748  191080 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:56:40.521773  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1211 23:56:40.523191  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.524446  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1211 23:56:40.524538  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.525508  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.525522  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.525556  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.526184  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.526857  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.526995  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.526987  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.527300  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1211 23:56:40.526876  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.528176  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.528215  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.528450  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.528655  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.528687  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.528793  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.529400  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.530020  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1211 23:56:40.530078  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.530252  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.530288  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.531125  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.531509  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.531550  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.531581  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.531691  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.532336  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.532490  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.532676  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.532971  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.533016  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.533392  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1211 23:56:40.533786  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.533419  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.533922  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.534209  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.534245  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.534763  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.534785  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.534834  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.534900  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.535083  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.535167  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.535342  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.535606  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1211 23:56:40.535631  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1211 23:56:40.535965  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.536268  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.536305  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536400  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.536418  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536548  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.536583  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.536615  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536653  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536963  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.536994  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.537838  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.537879  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.538098  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.540825  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.541431  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.541502  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.541709  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	W1211 23:56:41.043758  191080 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53944->192.168.39.2:22: read: connection reset by peer
	I1211 23:56:41.043809  191080 retry.go:31] will retry after 311.842554ms: ssh: handshake failed: read tcp 192.168.39.1:53944->192.168.39.2:22: read: connection reset by peer
	W1211 23:56:41.043894  191080 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53950->192.168.39.2:22: read: connection reset by peer
	I1211 23:56:41.043909  191080 retry.go:31] will retry after 329.825082ms: ssh: handshake failed: read tcp 192.168.39.1:53950->192.168.39.2:22: read: connection reset by peer
	I1211 23:56:41.808354  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1211 23:56:41.808403  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1211 23:56:41.861654  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1211 23:56:41.861692  191080 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1211 23:56:41.896943  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:56:41.918961  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:56:41.924444  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:56:41.946144  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:56:42.009856  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1211 23:56:42.009896  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1211 23:56:42.018699  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1211 23:56:42.069883  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:56:42.072418  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1211 23:56:42.145123  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:56:42.186767  191080 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1211 23:56:42.186812  191080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1211 23:56:42.259103  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:56:42.428120  191080 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.93183404s)
	I1211 23:56:42.428248  191080 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.925817571s)
	I1211 23:56:42.428352  191080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:56:42.428498  191080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1211 23:56:42.452426  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1211 23:56:42.452489  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1211 23:56:42.484208  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1211 23:56:42.484275  191080 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1211 23:56:42.588545  191080 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1211 23:56:42.588585  191080 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1211 23:56:42.633670  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1211 23:56:42.633723  191080 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1211 23:56:42.637947  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:56:42.706175  191080 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1211 23:56:42.706217  191080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1211 23:56:42.968807  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1211 23:56:42.968847  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1211 23:56:43.007497  191080 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:56:43.007532  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1211 23:56:43.028368  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1211 23:56:43.028403  191080 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1211 23:56:43.092788  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:56:43.092826  191080 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1211 23:56:43.128649  191080 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1211 23:56:43.128687  191080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1211 23:56:43.289535  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1211 23:56:43.289580  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1211 23:56:43.346982  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:56:43.347023  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1211 23:56:43.401818  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:56:43.523249  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:56:43.586597  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1211 23:56:43.586642  191080 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1211 23:56:43.774067  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1211 23:56:43.774118  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1211 23:56:43.801000  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:56:44.025438  191080 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:44.025490  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1211 23:56:44.174620  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.277572584s)
	I1211 23:56:44.174769  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.250262195s)
	I1211 23:56:44.193708  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1211 23:56:44.193737  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1211 23:56:44.555609  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:44.920026  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1211 23:56:44.920060  191080 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1211 23:56:45.697268  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1211 23:56:45.697305  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1211 23:56:46.254763  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1211 23:56:46.254799  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1211 23:56:46.581598  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:56:46.581642  191080 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1211 23:56:46.687719  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:56:47.971016  191080 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1211 23:56:47.975173  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:47.976154  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:47.976199  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:47.976614  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:48.491380  191080 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1211 23:56:48.692419  191080 addons.go:239] Setting addon gcp-auth=true in "addons-081397"
	I1211 23:56:48.692544  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:48.695342  191080 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1211 23:56:48.698779  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:48.699427  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:48.699601  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:48.699980  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:48.892556  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.973548228s)
	I1211 23:56:49.408333  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.462135831s)
	I1211 23:56:49.408425  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.389664864s)
	I1211 23:56:51.938139  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.865666864s)
	I1211 23:56:51.938187  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.793007267s)
	I1211 23:56:51.938385  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.679223761s)
	I1211 23:56:51.938486  191080 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.509912418s)
	I1211 23:56:51.938505  191080 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.510132207s)
	I1211 23:56:51.938523  191080 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1211 23:56:51.938693  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.300704152s)
	I1211 23:56:51.938740  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.868817664s)
	I1211 23:56:51.938763  191080 addons.go:495] Verifying addon ingress=true in "addons-081397"
	I1211 23:56:51.938775  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.536910017s)
	I1211 23:56:51.938799  191080 addons.go:495] Verifying addon registry=true in "addons-081397"
	I1211 23:56:51.939144  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.415830154s)
	I1211 23:56:51.939191  191080 addons.go:495] Verifying addon metrics-server=true in "addons-081397"
	I1211 23:56:51.939242  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.138197843s)
	I1211 23:56:51.939362  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.383652629s)
	W1211 23:56:51.939405  191080 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:56:51.939434  191080 retry.go:31] will retry after 326.794424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:56:51.939960  191080 node_ready.go:35] waiting up to 6m0s for node "addons-081397" to be "Ready" ...
	I1211 23:56:51.941538  191080 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-081397 service yakd-dashboard -n yakd-dashboard
	
	I1211 23:56:51.941540  191080 out.go:179] * Verifying registry addon...
	I1211 23:56:51.941553  191080 out.go:179] * Verifying ingress addon...
	I1211 23:56:51.943990  191080 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1211 23:56:51.944213  191080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1211 23:56:51.964791  191080 node_ready.go:49] node "addons-081397" is "Ready"
	I1211 23:56:51.964839  191080 node_ready.go:38] duration metric: took 24.813054ms for node "addons-081397" to be "Ready" ...
	I1211 23:56:51.964861  191080 api_server.go:52] waiting for apiserver process to appear ...
	I1211 23:56:51.964931  191080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 23:56:52.001706  191080 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:56:52.001747  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:52.002821  191080 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1211 23:56:52.002849  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:52.266441  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:52.467902  191080 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-081397" context rescaled to 1 replicas
	I1211 23:56:52.469927  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:52.473967  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:52.974199  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:53.067246  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.503323  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.503384  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:53.644012  191080 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.948623338s)
	I1211 23:56:53.644102  191080 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.679150419s)
	I1211 23:56:53.644155  191080 api_server.go:72] duration metric: took 13.147840239s to wait for apiserver process to appear ...
	I1211 23:56:53.644280  191080 api_server.go:88] waiting for apiserver healthz status ...
	I1211 23:56:53.644328  191080 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I1211 23:56:53.644007  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.956173954s)
	I1211 23:56:53.644412  191080 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-081397"
	I1211 23:56:53.646266  191080 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1211 23:56:53.647231  191080 out.go:179] * Verifying csi-hostpath-driver addon...
	I1211 23:56:53.648911  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:53.650424  191080 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1211 23:56:53.650455  191080 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1211 23:56:53.650539  191080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1211 23:56:53.695860  191080 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I1211 23:56:53.698147  191080 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:56:53.698187  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:53.714330  191080 api_server.go:141] control plane version: v1.34.2
	I1211 23:56:53.714403  191080 api_server.go:131] duration metric: took 70.105256ms to wait for apiserver health ...
	I1211 23:56:53.714423  191080 system_pods.go:43] waiting for kube-system pods to appear ...
	I1211 23:56:53.722159  191080 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1211 23:56:53.722205  191080 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1211 23:56:53.741176  191080 system_pods.go:59] 20 kube-system pods found
	I1211 23:56:53.741243  191080 system_pods.go:61] "amd-gpu-device-plugin-djxv6" [4f5aeb19-64d9-4433-b64e-e6cfb3654839] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:56:53.741269  191080 system_pods.go:61] "coredns-66bc5c9577-dmswf" [30230e03-4081-4208-bdd5-a93b39aaaa41] Running
	I1211 23:56:53.741279  191080 system_pods.go:61] "coredns-66bc5c9577-prc7f" [f5b3faeb-71ca-42c9-b591-4b563dca360b] Running
	I1211 23:56:53.741289  191080 system_pods.go:61] "csi-hostpath-attacher-0" [fd013040-9f15-4172-87f5-15b174a58d87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:56:53.741297  191080 system_pods.go:61] "csi-hostpath-resizer-0" [75ee82ce-3700-4961-8ce6-bd9b588cc478] Pending
	I1211 23:56:53.741307  191080 system_pods.go:61] "csi-hostpathplugin-69v6v" [d2bf83fd-6890-4456-896a-d83906c2ad1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:56:53.741316  191080 system_pods.go:61] "etcd-addons-081397" [76acbe8b-6c34-47ed-9c17-d10d2b90f854] Running
	I1211 23:56:53.741323  191080 system_pods.go:61] "kube-apiserver-addons-081397" [aa5c2483-4778-415c-983d-77b4683c028a] Running
	I1211 23:56:53.741330  191080 system_pods.go:61] "kube-controller-manager-addons-081397" [f66f3f89-2978-45f0-85e3-9b2485e2c357] Running
	I1211 23:56:53.741340  191080 system_pods.go:61] "kube-ingress-dns-minikube" [7b7df0e3-b14f-46c9-8338-f54a7557bdd0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:56:53.741347  191080 system_pods.go:61] "kube-proxy-jwqpk" [dd248790-eb90-4f63-bb25-4253ea30ba17] Running
	I1211 23:56:53.741358  191080 system_pods.go:61] "kube-scheduler-addons-081397" [d576bcfe-e1bc-4f95-be05-44d726aad7bf] Running
	I1211 23:56:53.741367  191080 system_pods.go:61] "metrics-server-85b7d694d7-zfsb8" [fd42d792-5bd0-449d-92f8-f0c0c74c4975] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:56:53.741382  191080 system_pods.go:61] "nvidia-device-plugin-daemonset-rbpjs" [22649f4f-f712-4939-86ae-d4e2f87acc0a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:56:53.741390  191080 system_pods.go:61] "registry-6b586f9694-f9q5b" [96c372a4-ae7e-4df5-9a48-525fc42f8bc5] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:56:53.741401  191080 system_pods.go:61] "registry-creds-764b6fb674-fn77c" [4d72d75e-437b-4632-9fb1-3a7067c23d39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:56:53.741414  191080 system_pods.go:61] "registry-proxy-fdnc8" [3d8a40d6-255a-4a70-aee7-d5a6ce60f129] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:56:53.741427  191080 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6pxqk" [9c319b4a-5f0f-4d81-9f15-6e457050470a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.741445  191080 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7pg65" [460595b1-c11f-4b8a-9d7c-5805587a937c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.741455  191080 system_pods.go:61] "storage-provisioner" [0c582cdc-c50b-4759-b05c-e3b1cd92e04f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:56:53.741497  191080 system_pods.go:74] duration metric: took 27.063753ms to wait for pod list to return data ...
	I1211 23:56:53.741514  191080 default_sa.go:34] waiting for default service account to be created ...
	I1211 23:56:53.789135  191080 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:56:53.789157  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1211 23:56:53.793775  191080 default_sa.go:45] found service account: "default"
	I1211 23:56:53.793806  191080 default_sa.go:55] duration metric: took 52.279991ms for default service account to be created ...
	I1211 23:56:53.793821  191080 system_pods.go:116] waiting for k8s-apps to be running ...
	I1211 23:56:53.844257  191080 system_pods.go:86] 20 kube-system pods found
	I1211 23:56:53.844307  191080 system_pods.go:89] "amd-gpu-device-plugin-djxv6" [4f5aeb19-64d9-4433-b64e-e6cfb3654839] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:56:53.844317  191080 system_pods.go:89] "coredns-66bc5c9577-dmswf" [30230e03-4081-4208-bdd5-a93b39aaaa41] Running
	I1211 23:56:53.844326  191080 system_pods.go:89] "coredns-66bc5c9577-prc7f" [f5b3faeb-71ca-42c9-b591-4b563dca360b] Running
	I1211 23:56:53.844334  191080 system_pods.go:89] "csi-hostpath-attacher-0" [fd013040-9f15-4172-87f5-15b174a58d87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:56:53.844340  191080 system_pods.go:89] "csi-hostpath-resizer-0" [75ee82ce-3700-4961-8ce6-bd9b588cc478] Pending
	I1211 23:56:53.844352  191080 system_pods.go:89] "csi-hostpathplugin-69v6v" [d2bf83fd-6890-4456-896a-d83906c2ad1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:56:53.844358  191080 system_pods.go:89] "etcd-addons-081397" [76acbe8b-6c34-47ed-9c17-d10d2b90f854] Running
	I1211 23:56:53.844364  191080 system_pods.go:89] "kube-apiserver-addons-081397" [aa5c2483-4778-415c-983d-77b4683c028a] Running
	I1211 23:56:53.844369  191080 system_pods.go:89] "kube-controller-manager-addons-081397" [f66f3f89-2978-45f0-85e3-9b2485e2c357] Running
	I1211 23:56:53.844377  191080 system_pods.go:89] "kube-ingress-dns-minikube" [7b7df0e3-b14f-46c9-8338-f54a7557bdd0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:56:53.844387  191080 system_pods.go:89] "kube-proxy-jwqpk" [dd248790-eb90-4f63-bb25-4253ea30ba17] Running
	I1211 23:56:53.844394  191080 system_pods.go:89] "kube-scheduler-addons-081397" [d576bcfe-e1bc-4f95-be05-44d726aad7bf] Running
	I1211 23:56:53.844407  191080 system_pods.go:89] "metrics-server-85b7d694d7-zfsb8" [fd42d792-5bd0-449d-92f8-f0c0c74c4975] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:56:53.844416  191080 system_pods.go:89] "nvidia-device-plugin-daemonset-rbpjs" [22649f4f-f712-4939-86ae-d4e2f87acc0a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:56:53.844429  191080 system_pods.go:89] "registry-6b586f9694-f9q5b" [96c372a4-ae7e-4df5-9a48-525fc42f8bc5] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:56:53.844439  191080 system_pods.go:89] "registry-creds-764b6fb674-fn77c" [4d72d75e-437b-4632-9fb1-3a7067c23d39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:56:53.844475  191080 system_pods.go:89] "registry-proxy-fdnc8" [3d8a40d6-255a-4a70-aee7-d5a6ce60f129] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:56:53.844488  191080 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6pxqk" [9c319b4a-5f0f-4d81-9f15-6e457050470a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.844498  191080 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7pg65" [460595b1-c11f-4b8a-9d7c-5805587a937c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.844507  191080 system_pods.go:89] "storage-provisioner" [0c582cdc-c50b-4759-b05c-e3b1cd92e04f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:56:53.844519  191080 system_pods.go:126] duration metric: took 50.689154ms to wait for k8s-apps to be running ...
	I1211 23:56:53.844532  191080 system_svc.go:44] waiting for kubelet service to be running ....
	I1211 23:56:53.844608  191080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 23:56:53.902002  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:56:53.955676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.955845  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.160809  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:54.448357  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:54.453907  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.660400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:54.960099  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:54.962037  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.993140  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.726594297s)
	I1211 23:56:54.993153  191080 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.148518984s)
	I1211 23:56:54.993221  191080 system_svc.go:56] duration metric: took 1.148683395s WaitForService to wait for kubelet
	I1211 23:56:54.993231  191080 kubeadm.go:587] duration metric: took 14.496919105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:56:54.993249  191080 node_conditions.go:102] verifying NodePressure condition ...
	I1211 23:56:55.001998  191080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1211 23:56:55.002046  191080 node_conditions.go:123] node cpu capacity is 2
	I1211 23:56:55.002095  191080 node_conditions.go:105] duration metric: took 8.839368ms to run NodePressure ...
	I1211 23:56:55.002114  191080 start.go:242] waiting for startup goroutines ...
	I1211 23:56:55.161169  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:55.517092  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:55.539796  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:55.579689  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.677622577s)
	I1211 23:56:55.581053  191080 addons.go:495] Verifying addon gcp-auth=true in "addons-081397"
	I1211 23:56:55.583166  191080 out.go:179] * Verifying gcp-auth addon...
	I1211 23:56:55.585775  191080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1211 23:56:55.610126  191080 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1211 23:56:55.610157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:55.684117  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:55.957671  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:55.958053  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.094446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:56.159426  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:56.454250  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:56.454305  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.593123  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:56.698651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:56.955164  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.955254  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:57.097317  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:57.160266  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:57.454368  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:57.455193  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:57.593869  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:57.657455  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:57.952124  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:57.953630  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:58.091657  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:58.192765  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:58.448854  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:58.454640  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:58.590861  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:58.656664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:58.951563  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:58.951970  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:59.092726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:59.156085  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:59.453106  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:59.455110  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:59.594050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:59.659663  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:59.950597  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:59.953854  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:00.098806  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:00.158739  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:00.451405  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:00.451426  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:00.592070  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:00.656305  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:00.954392  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:00.957143  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:01.089837  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:01.157925  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:01.451549  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:01.451947  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:01.592758  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:01.655586  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:01.950439  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:01.950524  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:02.091801  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:02.155816  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:02.449634  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:02.450369  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:02.591242  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:02.655088  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:02.952327  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:02.952622  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:03.090558  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:03.166505  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:03.449517  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:03.450499  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:03.590638  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:03.656141  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:03.950487  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:03.950653  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:04.092052  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:04.164233  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:04.452727  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:04.453010  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:04.590564  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:04.658766  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:04.956776  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:04.960214  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:05.089595  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:05.158346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:05.454648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:05.455366  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:05.589445  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:05.725092  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:05.950042  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:05.953003  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:06.093507  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:06.156581  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:06.448896  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:06.452118  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:06.589736  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:06.660370  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:06.952602  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:06.952699  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:07.093794  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:07.159924  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:07.451182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:07.452486  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:07.593007  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:07.655785  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:07.955585  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:07.955714  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:08.092772  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:08.159691  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:08.452421  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:08.453004  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:08.596649  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:08.657754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:09.151194  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:09.163928  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:09.166605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:09.166806  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:09.452575  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:09.452859  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:09.591132  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:09.658223  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:09.953976  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:09.958754  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:10.097815  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:10.160643  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:10.449852  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:10.449848  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:10.593346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:10.655349  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:10.951129  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:10.958386  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:11.091038  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:11.163797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:11.451681  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:11.455196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:11.594544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:11.665061  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:11.951173  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:11.952848  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:12.093150  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:12.157974  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:12.449252  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:12.452312  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:12.591441  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:12.661703  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:12.958989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:12.960103  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:13.089485  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:13.156074  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:13.452932  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:13.453001  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:13.592446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:13.658121  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:13.962529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:13.963557  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:14.091969  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:14.158221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:14.449389  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:14.450691  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:14.594295  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:14.659320  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:14.949072  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:14.952087  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:15.089407  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:15.155332  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:15.813442  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:15.813494  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:15.813503  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:15.813799  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:15.954853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:15.957241  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:16.091368  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:16.157225  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:16.462043  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:16.465005  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:16.590303  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:16.693434  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:16.948523  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:16.948597  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:17.090370  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:17.155629  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:17.450403  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:17.450602  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:17.592008  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:17.656775  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:17.952011  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:17.953801  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:18.090174  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:18.155951  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:18.447617  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:18.448323  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:18.590230  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:18.656537  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:18.948537  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:18.948865  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:19.090670  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:19.156440  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:19.448193  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:19.449148  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:19.589950  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:19.655094  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:19.949387  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:19.950227  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:20.092096  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:20.155631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:20.448262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:20.449009  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:20.589664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:20.655779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:20.952599  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:20.952790  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:21.090743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:21.154683  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:21.451260  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:21.452256  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:21.593154  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:21.656811  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:22.109419  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:22.111778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:22.111954  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:22.158011  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:22.452303  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:22.452748  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:22.590963  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:22.655856  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:22.949568  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:22.949619  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:23.091094  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:23.155741  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:23.449880  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:23.449919  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:23.590590  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:23.658406  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:23.948819  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:23.949527  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:24.090686  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:24.154696  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:24.449105  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:24.449431  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:24.591490  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:24.656162  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:24.948671  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:24.948867  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:25.089628  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:25.157506  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:25.448637  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:25.449144  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:25.589959  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:25.654962  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:25.949839  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:25.950510  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:26.091561  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:26.156350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:26.448681  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:26.448908  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:26.590622  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:26.657217  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:26.948184  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:26.950039  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:27.089200  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:27.155324  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:27.449676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:27.449798  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:27.590267  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:27.655290  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:27.948648  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:27.948982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:28.090233  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:28.155268  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:28.448106  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:28.448387  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:28.589756  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:28.656215  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:28.948715  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:28.949727  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:29.090059  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:29.155563  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:29.448981  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:29.449967  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:29.589372  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:29.656746  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:29.951190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:29.951266  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.089966  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:30.156024  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:30.449807  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:30.449940  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.592795  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:30.655965  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:30.949686  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.949854  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:31.089144  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:31.155728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:31.448249  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:31.451576  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:31.590176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:31.656389  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:31.949905  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:31.950451  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:32.090191  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:32.156400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:32.449602  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:32.449836  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:32.591164  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:32.657213  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:32.948520  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:32.948804  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:33.089649  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:33.156050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:33.450227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:33.450227  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:33.590456  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:33.656274  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:33.949256  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:33.949347  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:34.091203  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:34.156547  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:34.450354  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:34.450411  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:34.591349  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:34.656156  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:34.948431  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:34.948893  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:35.089378  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:35.156784  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:35.450919  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:35.451766  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:35.589587  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:35.656818  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:35.949417  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:35.950715  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:36.090779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:36.155710  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:36.452002  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:36.452240  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:36.590343  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:36.655697  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:36.949354  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:36.949385  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:37.091333  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:37.155660  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:37.448936  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:37.449075  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:37.590116  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:37.656050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:37.949528  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:37.950239  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:38.090375  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:38.156630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:38.449400  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:38.449825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:38.590511  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:38.655832  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:38.948985  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:38.949093  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:39.090158  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:39.155820  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:39.449629  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:39.451242  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:39.590400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:39.656829  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:39.948865  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:39.949106  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:40.089281  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:40.156612  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:40.450580  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:40.450998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:40.590980  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:40.655008  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:40.949712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:40.949853  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:41.089939  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:41.155401  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:41.448080  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:41.451541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:41.590421  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:41.656608  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:41.950025  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:41.950358  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:42.090340  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:42.159954  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:42.450058  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:42.450329  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:42.589818  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:42.655716  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:42.948985  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:42.952252  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:43.090380  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:43.155314  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:43.450015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:43.450202  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:43.590190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:43.655086  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:43.948401  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:43.949453  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:44.090744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:44.154784  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:44.449614  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:44.449642  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:44.590645  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:44.656686  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:44.950021  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:44.951009  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:45.090020  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:45.155822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:45.449438  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:45.449646  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:45.590975  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:45.656192  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:45.949128  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:45.949580  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:46.091176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:46.155290  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:46.448997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:46.450442  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:46.590802  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:46.654435  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:46.949893  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:46.950255  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:47.091631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:47.156353  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:47.450093  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:47.455744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:47.622817  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:47.657485  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:47.951291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:47.953670  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:48.093758  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:48.155393  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:48.452298  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:48.452366  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:48.592111  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:48.657572  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:48.951626  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:48.952512  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:49.091082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:49.157173  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:49.452908  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:49.453973  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:49.591765  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:49.699112  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:49.951994  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:49.953086  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:50.090983  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:50.162358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:50.452611  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:50.453823  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:50.593450  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:50.664907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:50.961300  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:50.961709  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:51.105008  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:51.168542  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:51.460773  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:51.463367  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:51.596820  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:51.659982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:51.954007  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:51.956978  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:52.090564  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:52.156735  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:52.459306  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:52.461605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:52.591646  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:52.659476  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:52.949249  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:52.949360  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:53.091342  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:53.158735  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:53.451408  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:53.454585  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:53.590776  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:53.656237  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:53.954524  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:53.954679  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:54.095794  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:54.159448  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:54.576047  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:54.576308  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:54.590001  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:54.659406  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:54.950589  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:54.950691  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:55.092084  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:55.157456  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... log truncated for brevity: kapi.go:96 repeated the same four poll lines every ~500 ms, with pods "kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=gcp-auth", and "kubernetes.io/minikube-addons=csi-hostpath-driver" all still reporting "Pending: [<nil>]" through the last entry at I1211 23:58:28.090479 ...]
	I1211 23:58:28.155291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:28.450004  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:28.450851  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:28.590103  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:28.655339  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:28.953363  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:28.954717  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:29.093694  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:29.155028  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:29.449055  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:29.450347  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:29.590581  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:29.656654  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:29.950515  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:29.950799  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:30.090326  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:30.155485  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:30.448572  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:30.449692  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:30.590878  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:30.655807  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:30.956951  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:30.957577  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:31.092534  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:31.155903  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:31.449802  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:31.450326  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:31.593269  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:31.656218  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:31.949211  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:31.949934  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:32.091982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:32.155603  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:32.449522  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:32.451425  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:32.590687  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:32.655082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:32.950545  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:32.950713  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:33.091712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:33.156900  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:33.450998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:33.451121  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:33.592756  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:33.655387  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:33.956059  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:33.956346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:34.090541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:34.155676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:34.449252  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:34.449255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:34.589931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:34.655778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:34.950791  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:34.951042  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:35.089716  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:35.155182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:35.447641  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:35.449881  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:35.590101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:35.655365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:35.949158  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:35.951312  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:36.090687  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:36.156509  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:36.448272  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:36.448489  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:36.591352  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:36.657569  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:36.950696  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:36.952142  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:37.090121  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:37.155891  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:37.448859  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:37.449811  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:37.589598  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:37.655164  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:37.950606  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:37.950726  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:38.089931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:38.155402  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:38.449956  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:38.450889  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:38.590982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:38.655741  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:38.950070  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:38.950118  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:39.090737  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:39.156071  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:39.448413  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:39.448760  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:39.590316  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:39.655228  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:39.948192  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:39.948232  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:40.089574  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:40.156012  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:40.448864  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:40.451601  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:40.592083  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:40.656209  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:40.948842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:40.949127  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:41.091091  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:41.155236  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:41.449778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:41.450851  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:41.589659  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:41.656116  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:41.949174  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:41.949802  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:42.090816  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:42.155802  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:42.450496  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:42.452958  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:42.591015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:42.655595  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:42.949982  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:42.951301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:43.091554  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:43.155772  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:43.451215  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:43.451399  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:43.590489  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:43.655665  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:43.949328  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:43.950974  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:44.092276  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:44.155455  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:44.449429  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:44.449512  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:44.591046  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:44.655586  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:44.949500  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:44.951599  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:45.094722  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:45.154774  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:45.449770  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:45.451691  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:45.590761  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:45.655352  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:45.949743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:45.949864  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:46.090631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:46.156103  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:46.449181  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:46.449779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:46.591976  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:46.655596  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:46.949173  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:46.950623  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:47.093977  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:47.156056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:47.450281  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:47.450897  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:47.591849  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:47.655891  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:47.950318  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:47.951578  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:48.091959  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:48.154872  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:48.450075  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:48.451948  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:48.589733  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:48.655026  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:48.947902  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:48.948922  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:49.090363  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:49.155236  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:49.449018  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:49.449294  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:49.589648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:49.654518  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:49.949085  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:49.949327  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:50.089715  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:50.155336  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:50.450276  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:50.450610  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:50.590265  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:50.655617  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:50.949893  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:50.951287  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:51.090631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:51.155403  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:51.449820  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:51.451010  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:51.591075  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:51.654839  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:51.949284  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:51.950009  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:52.090582  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:52.157494  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:52.448608  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:52.450368  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:52.590998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:52.655180  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:52.948718  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:52.950284  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:53.090712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:53.158605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:53.451168  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:53.451536  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:53.589760  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:53.657022  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:53.948734  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:53.951371  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:54.090202  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:54.155484  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:54.448582  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:54.450090  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... identical kapi.go:96 polling lines elided: the same four label selectors (gcp-auth, csi-hostpath-driver, ingress-nginx, registry) were re-checked roughly every 0.5s, and every pod remained Pending from 23:58:54 through 23:59:27 ...]
	I1211 23:59:27.090220  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:27.156297  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:27.450874  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:27.451092  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:27.589664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:27.655496  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:27.949431  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:27.951612  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:28.090350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:28.155161  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:28.448979  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:28.449151  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:28.589861  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:28.655413  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:28.949789  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:28.951855  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:29.090331  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:29.157070  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:29.449482  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:29.450006  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:29.590813  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:29.655573  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:29.949907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:29.950025  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:30.091458  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:30.158405  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:30.447779  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:30.448834  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:30.591091  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:30.655875  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:30.950684  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:30.953289  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:31.091332  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:31.156823  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:31.448823  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:31.450781  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:31.591809  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:31.656075  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:31.948759  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:31.948968  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:32.091729  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:32.154747  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:32.449239  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:32.449837  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:32.590571  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:32.655646  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:32.949282  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:32.949595  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:33.090694  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:33.155167  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:33.451071  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:33.451405  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:33.591171  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:33.656119  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:33.949262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:33.949454  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:34.090283  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:34.155781  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:34.450392  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:34.451683  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:34.591571  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:34.655909  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:34.949219  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:34.949408  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:35.089980  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:35.154740  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:35.450095  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:35.450349  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:35.591227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:35.692481  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:35.949141  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:35.951867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:36.090822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:36.156098  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:36.448722  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:36.449538  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:36.589624  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:36.657137  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:36.948984  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:36.949366  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:37.091350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:37.157094  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:37.448182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:37.450253  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:37.591119  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:37.656425  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:37.948975  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:37.949867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:38.089759  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:38.155828  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:38.451552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:38.451647  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:38.589973  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:38.655877  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:38.951367  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:38.951367  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:39.091390  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:39.405012  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:39.452050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:39.452196  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:39.595044  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:39.665344  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:39.953209  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:39.953555  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:40.092147  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:40.155320  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:40.451110  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:40.451951  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:40.591316  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:40.655931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:40.950017  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:40.951388  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:41.090401  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:41.155143  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:41.448442  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:41.449115  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:41.591565  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:41.656306  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:41.949112  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:41.949534  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:42.091549  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:42.155830  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:42.449887  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:42.450125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:42.591409  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:42.658038  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:42.948502  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:42.951166  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:43.090200  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:43.156509  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:43.450320  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:43.450913  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:43.592334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:43.656125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:43.948166  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:43.949168  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:44.089675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:44.155311  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:44.447960  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:44.449667  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:44.592196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:44.655822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:44.952752  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:44.952747  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:45.090049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:45.155289  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:45.448550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:45.449206  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:45.593908  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:45.656032  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:45.949589  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:45.949968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:46.089906  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:46.156255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:46.448239  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:46.448309  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:46.590954  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:46.656897  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:46.950439  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:46.952416  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:47.090308  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:47.156374  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:47.449653  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:47.450249  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:47.589853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:47.655793  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:47.948702  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:47.948870  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:48.089879  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:48.155753  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:48.448357  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:48.450383  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:48.590577  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:48.656031  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:48.948556  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:48.950049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:49.089412  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:49.156350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:49.449163  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:49.449205  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:49.590039  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:49.655560  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:49.949665  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:49.950181  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:50.090049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:50.155293  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:50.448667  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:50.449257  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:50.590165  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:50.655541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:50.950136  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:50.951139  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:51.092122  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:51.155044  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:51.448983  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:51.449212  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:51.595578  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:51.696454  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:51.949343  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:51.949398  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:52.090651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:52.156291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:52.449203  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:52.449249  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:52.590754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:52.654991  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:52.948372  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:52.948385  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:53.091609  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:53.156662  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:53.450157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:53.451318  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:53.590507  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:53.658421  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... 259 near-identical kapi.go:96 poll lines elided: the same four selectors (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver) were re-polled roughly twice per second from 23:59:53 through 00:00:26, each remaining "Pending: [<nil>]" for the entire span ...]
	I1212 00:00:26.155724  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:26.449336  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:26.450596  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:26.592346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:26.657848  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:26.949333  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:26.950229  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:27.090752  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:27.157107  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:27.449820  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:27.450010  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:27.590514  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:27.657927  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:27.951547  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:27.952176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:28.090550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:28.156767  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:28.450227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:28.451522  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:28.591521  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:28.656790  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:28.949538  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:28.949826  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:29.090055  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:29.155834  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:29.450097  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:29.450167  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:29.590630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:29.655299  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:29.949633  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:29.950101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:30.089708  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:30.154762  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:30.449094  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:30.450366  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:30.590666  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:30.655870  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:30.948836  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:30.948972  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:31.089244  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:31.155334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:31.448853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:31.449043  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:31.590675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:31.655919  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:31.950253  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:31.951767  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:32.089750  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:32.155423  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:32.449657  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:32.449946  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:32.590358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:32.656797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:32.950101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:32.950269  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:33.090803  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:33.154674  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:33.454615  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:33.454885  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:33.589479  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:33.656942  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:33.953188  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:33.954139  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:34.091629  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:34.156823  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:34.448754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:34.449071  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:34.589301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:34.656551  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:34.948611  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:34.950196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:35.091634  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:35.160584  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:35.448684  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:35.449322  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:35.589630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:35.655232  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:35.947899  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:35.948842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:36.090521  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:36.155599  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:36.449031  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:36.449382  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:36.591743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:36.655255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:36.948722  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:36.949779  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:37.090918  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:37.157590  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:37.448713  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:37.449843  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:37.589677  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:37.656720  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:37.949867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:37.950644  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:38.093262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:38.156220  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:38.448943  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:38.450543  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:38.591971  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:38.655424  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:38.949892  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:38.951285  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:39.090837  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:39.155790  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:39.449689  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:39.450060  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:39.590012  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:39.655544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:39.949824  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:39.954336  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:40.095357  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:40.155946  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:40.451271  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:40.452848  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:40.590990  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:40.655214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:40.963350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:40.967975  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:41.092691  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:41.157255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:41.461052  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:41.464606  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:41.592100  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:41.658218  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:41.951346  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:41.953539  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:42.091948  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:42.170296  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:42.449833  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:42.449879  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:42.589631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:42.655925  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:42.952512  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:42.953941  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:43.090620  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:43.155937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:43.449805  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:43.451726  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:43.590975  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:43.655839  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:43.949267  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:43.950221  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:44.091825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:44.158335  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:44.448909  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:44.450502  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:44.590179  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:44.656226  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:44.948916  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:44.950140  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:45.089907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:45.156705  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:45.449149  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:45.449285  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:45.590294  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:45.655955  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:45.948817  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:45.951525  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:46.091170  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:46.155968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:46.448814  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:46.450026  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:46.590257  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:46.655476  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:46.950202  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:46.950358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:47.091544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:47.156635  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:47.448759  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:47.450582  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:47.591771  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:47.655438  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:47.951589  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:47.951950  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:48.091551  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:48.155719  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:48.449736  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:48.449931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:48.590742  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:48.656337  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:48.951175  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:48.951871  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:49.089625  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:49.154672  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:49.449387  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:49.451177  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:49.589995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:49.655055  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:49.947911  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:49.948323  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:50.090498  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:50.155724  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:50.448625  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:50.449769  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:50.589819  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:50.656445  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:50.952353  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:50.952565  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:51.091804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:51.155910  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:51.449736  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:51.452867  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:51.590141  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:51.655015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:51.949047  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:51.951778  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:52.091793  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:52.156022  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:52.448369  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:52.448494  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:52.592499  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:52.657185  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:52.948673  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:52.949634  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:53.092041  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:53.157391  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:53.451159  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:53.451297  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:53.592141  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:53.655589  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:53.949414  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:53.949654  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:54.090980  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:54.157644  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:54.449578  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:54.449942  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:54.592657  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:54.655642  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:54.949887  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:54.950225  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:55.090862  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:55.155383  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:55.448872  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:55.450478  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:55.592070  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:55.655878  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:55.950573  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:55.951643  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:56.090608  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:56.156601  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:56.449633  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:56.449740  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:56.589768  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:56.656648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:56.951253  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:56.951560  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:57.090880  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:57.155285  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:57.450738  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:57.452046  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:57.590263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:57.657500  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:57.950152  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:57.950364  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:58.091638  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:58.193386  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:58.450222  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:58.450351  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:58.591052  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:58.656102  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:58.948215  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:58.948208  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:59.090720  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:59.155790  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:59.449636  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:59.450870  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:59.589564  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:59.656184  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:59.948230  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:59.948326  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:00.091313  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:00.155446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:00.449946  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:00.449997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:00.590931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:00.655953  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:00.949430  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:00.949437  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:01.091948  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:01.156208  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:01.452056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:01.452238  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:01.590869  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:01.655918  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:01.949266  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:01.950875  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:02.094697  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:02.155278  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:02.456102  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:02.456104  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:02.591508  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:02.657972  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:02.950365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:02.950787  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:03.091261  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:03.155749  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:03.451192  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:03.451626  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:03.592675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:03.657198  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:03.949710  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:03.950534  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:04.090705  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:04.154619  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:04.450263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:04.451232  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:04.589795  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:04.654975  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:04.951313  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:04.952632  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:05.093185  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:05.156889  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:05.448891  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:05.452008  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:05.589422  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:05.655673  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:05.954272  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:05.955495  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:06.090800  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:06.166615  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:06.451261  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:06.451837  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:06.592681  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:06.655679  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:06.949675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:06.949685  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:07.091261  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:07.156385  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:07.449932  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:07.450455  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:07.590827  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:07.655109  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:07.949211  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:07.950064  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:08.090887  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:08.154572  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:08.450690  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:08.450871  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:08.590523  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:08.655973  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:08.948114  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:08.949750  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:09.090989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:09.155955  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:09.449016  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:09.449347  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:09.590817  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:09.656200  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:09.950977  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:09.951430  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:10.091695  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:10.155672  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:10.448805  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:10.449149  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:10.591881  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:10.655305  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:10.948943  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:10.949765  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:11.089778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:11.156846  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:11.450576  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:11.451630  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:11.591910  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:11.657557  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:11.949551  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:11.951423  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:12.090384  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:12.160393  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:12.453917  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:12.453927  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:12.593211  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:12.659806  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:12.963298  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:12.966253  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:13.093949  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:13.194867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:13.468169  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:13.473451  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:13.607669  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:13.664252  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:13.963788  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:13.970682  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:14.100566  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:14.183386  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:14.481122  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:14.481147  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:14.591963  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:14.659279  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:14.953923  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:14.957839  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:15.091640  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:15.160946  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:15.453995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:15.454249  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:15.592976  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:15.657252  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:15.952201  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:15.954099  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:16.091133  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:16.159988  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:16.451755  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:16.452022  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:16.593102  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:16.657191  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:16.949980  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:16.950946  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:17.091395  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:17.156727  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:17.453497  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:17.454292  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:17.590745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:17.658023  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:17.953077  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:17.954404  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:18.144325  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:18.163884  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:18.506329  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:18.507416  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:18.598801  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:18.658864  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:18.951533  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:18.951768  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:19.091399  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:19.157617  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:19.453370  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:19.453419  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:19.590356  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:19.656750  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:19.949694  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:19.952780  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:20.093710  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:20.162888  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:20.455842  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:20.457429  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:20.597047  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:20.658966  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:20.952832  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:20.956314  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:21.093605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:21.160516  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:21.449838  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:21.454368  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:21.590229  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:21.657324  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:21.951876  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:21.955993  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:22.093456  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:22.156844  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:22.452923  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:22.453818  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:22.591894  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:22.664786  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:22.950056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:22.950755  191080 kapi.go:107] duration metric: took 4m31.006766325s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 00:01:23.091356  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:23.164794  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:23.496726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:23.601172  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:23.663423  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:23.954300  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:24.094097  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:24.156533  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:24.450111  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:24.590446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:24.655954  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:24.951486  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:25.101144  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:25.157114  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:25.459936  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:25.589209  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:25.655404  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:25.949290  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:26.091205  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:26.192561  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:26.449239  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:26.594301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:26.695112  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:26.950968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:27.090419  191080 kapi.go:107] duration metric: took 4m31.504642831s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 00:01:27.092322  191080 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-081397 cluster.
	I1212 00:01:27.093973  191080 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 00:01:27.095595  191080 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 00:01:27.155630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:27.448192  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:27.656676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:27.949602  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:28.156035  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:28.452122  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:28.656798  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:28.951030  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:29.155812  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:29.450030  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:29.655506  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:29.950947  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:30.156571  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:30.449689  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:30.657986  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:30.952997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:31.155349  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:31.449194  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:31.657440  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:31.950318  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:32.157071  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:32.449726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:32.657033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:32.950261  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:33.156773  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:33.450869  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:33.655552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:33.950125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:34.156033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:34.449419  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:34.663651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:34.951541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:35.156031  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:35.450253  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:35.655842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:35.948990  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:36.156446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:36.449076  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:36.656334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:36.949221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:37.155204  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:37.448992  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:37.656232  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:37.948670  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:38.155550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:38.449652  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:38.655986  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:38.950165  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:39.156285  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:39.448380  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:39.656058  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:39.950214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:40.157325  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:40.449511  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:40.656623  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:40.952375  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:41.157648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:41.449624  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:41.657125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:41.951249  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:42.157745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:42.451135  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:42.657530  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:42.949771  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:43.155904  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:43.450113  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:43.655365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:43.950157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:44.156180  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:44.450046  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:44.655809  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:44.950604  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:45.155614  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:45.448273  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:45.656354  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:45.950705  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:46.156364  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:46.448416  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:46.658552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:46.949651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:47.158180  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:47.452700  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:47.656868  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:47.949912  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:48.156755  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:48.451939  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:48.656432  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:48.950201  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:49.156157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:49.448332  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:49.656228  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:49.950259  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:50.157269  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:50.448882  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:50.656199  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:50.950248  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:51.156922  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:51.449858  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:51.658522  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:51.950331  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:52.158342  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:52.452607  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:52.657583  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:52.952541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:53.156712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:53.452538  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:53.656385  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:53.949617  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:54.154792  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:54.450797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:54.655995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:54.950745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:55.155328  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:55.448751  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:55.655216  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:55.949363  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:56.157592  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:56.451921  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:56.664544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:56.958059  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:57.156884  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:57.449911  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:57.659329  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:57.950478  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:58.157728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:58.449728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:58.656867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:58.950675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:59.158989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:59.450999  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:59.661594  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:59.948955  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:00.160422  191080 kapi.go:107] duration metric: took 5m6.50988483s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 00:02:00.450671  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:00.952269  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:01.449529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:01.950781  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:02.450250  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:02.953623  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:03.451822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:03.951054  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:04.452684  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:04.952913  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:05.449851  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:05.951096  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:06.448632  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:06.949689  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:07.450190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:07.949743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:08.449834  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:08.949956  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:09.449343  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:09.950154  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:10.449652  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:10.953533  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:11.448912  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:11.950203  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:12.450650  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:12.950028  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:13.451182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:13.950015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:14.449762  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:14.949166  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:15.450181  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:15.950756  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:16.448817  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:16.948583  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:17.449804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:17.951493  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:18.450240  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:18.951299  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:19.450677  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:19.949706  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:20.449531  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:20.950756  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:21.450374  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:21.951339  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:22.449394  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:22.950909  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:23.477937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:23.951049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:24.448664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:24.949615  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:25.449359  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:25.949444  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:26.450501  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:26.949804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:27.450825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:27.948894  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:28.449021  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:28.950004  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:29.450317  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:29.949495  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:30.456871  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:30.948730  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:31.449752  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:31.950182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:32.450002  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:32.948690  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:33.448231  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:33.950565  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:34.450626  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:34.949823  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:35.450102  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:35.948400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:36.449033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:36.949643  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:37.449021  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:37.948018  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:38.455139  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:38.950450  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:39.450566  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:39.949291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:40.450245  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:40.951396  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:41.451789  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:41.949099  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:42.450082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:42.954847  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:43.450792  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:43.949191  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:44.449125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:44.949110  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:45.453064  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:45.948748  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:46.449176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:46.948540  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:47.448415  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:47.950829  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:48.450193  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:48.950076  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:49.450726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:49.949133  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:50.448258  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:50.949440  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:51.448882  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:51.944788  191080 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1212 00:02:51.944836  191080 kapi.go:107] duration metric: took 6m0.000623545s to wait for kubernetes.io/minikube-addons=registry ...
	W1212 00:02:51.944978  191080 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1212 00:02:51.946936  191080 out.go:179] * Enabled addons: amd-gpu-device-plugin, default-storageclass, storage-provisioner, inspektor-gadget, cloud-spanner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, ingress, gcp-auth, csi-hostpath-driver
	I1212 00:02:51.948508  191080 addons.go:530] duration metric: took 6m11.452163579s for enable addons: enabled=[amd-gpu-device-plugin default-storageclass storage-provisioner inspektor-gadget cloud-spanner nvidia-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots ingress gcp-auth csi-hostpath-driver]
	I1212 00:02:51.948603  191080 start.go:247] waiting for cluster config update ...
	I1212 00:02:51.948631  191080 start.go:256] writing updated cluster config ...
	I1212 00:02:51.949105  191080 ssh_runner.go:195] Run: rm -f paused
	I1212 00:02:51.959702  191080 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:02:51.966230  191080 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-prc7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.976818  191080 pod_ready.go:94] pod "coredns-66bc5c9577-prc7f" is "Ready"
	I1212 00:02:51.976851  191080 pod_ready.go:86] duration metric: took 10.502006ms for pod "coredns-66bc5c9577-prc7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.982130  191080 pod_ready.go:83] waiting for pod "etcd-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.989125  191080 pod_ready.go:94] pod "etcd-addons-081397" is "Ready"
	I1212 00:02:51.989162  191080 pod_ready.go:86] duration metric: took 7.000579ms for pod "etcd-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.992364  191080 pod_ready.go:83] waiting for pod "kube-apiserver-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.000110  191080 pod_ready.go:94] pod "kube-apiserver-addons-081397" is "Ready"
	I1212 00:02:52.000155  191080 pod_ready.go:86] duration metric: took 7.740136ms for pod "kube-apiserver-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.004027  191080 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.365676  191080 pod_ready.go:94] pod "kube-controller-manager-addons-081397" is "Ready"
	I1212 00:02:52.365718  191080 pod_ready.go:86] duration metric: took 361.647196ms for pod "kube-controller-manager-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.569885  191080 pod_ready.go:83] waiting for pod "kube-proxy-jwqpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.966570  191080 pod_ready.go:94] pod "kube-proxy-jwqpk" is "Ready"
	I1212 00:02:52.966607  191080 pod_ready.go:86] duration metric: took 396.689665ms for pod "kube-proxy-jwqpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:53.167508  191080 pod_ready.go:83] waiting for pod "kube-scheduler-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:53.566695  191080 pod_ready.go:94] pod "kube-scheduler-addons-081397" is "Ready"
	I1212 00:02:53.566729  191080 pod_ready.go:86] duration metric: took 399.188237ms for pod "kube-scheduler-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:53.566746  191080 pod_ready.go:40] duration metric: took 1.607005753s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:02:53.630859  191080 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:02:53.633243  191080 out.go:179] * Done! kubectl is now configured to use "addons-081397" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 00:09:38 addons-081397 crio[814]: time="2025-12-12 00:09:38.968221114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765498178968184215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd53a258-7d76-4f03-bf8b-4a2112a8c49d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:09:38 addons-081397 crio[814]: time="2025-12-12 00:09:38.969467833Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0355531-558f-4417-a584-79e4a14d08a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:38 addons-081397 crio[814]: time="2025-12-12 00:09:38.969693250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0355531-558f-4417-a584-79e4a14d08a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:38 addons-081397 crio[814]: time="2025-12-12 00:09:38.970195405Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.has
h: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&Image
Spec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf6
77b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daaef29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0355531-558f-4417-a584-79e4a14d08a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.019581214Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b237101-076d-429f-8034-365a41c06f96 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.019663600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b237101-076d-429f-8034-365a41c06f96 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.022169539Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3552af33-27d0-4bd1-a593-ae544ff047c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.024379877Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765498179024281937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3552af33-27d0-4bd1-a593-ae544ff047c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.025645266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf5f6e2e-a689-4db4-b22b-4fbfd9f88f1a name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.025727759Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf5f6e2e-a689-4db4-b22b-4fbfd9f88f1a name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.026058420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.has
h: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&Image
Spec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf6
77b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daaef29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf5f6e2e-a689-4db4-b22b-4fbfd9f88f1a name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.072085856Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f20f090-097f-445d-a064-a6b580124a5e name=/runtime.v1.RuntimeService/Version
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.072450983Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f20f090-097f-445d-a064-a6b580124a5e name=/runtime.v1.RuntimeService/Version
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.074730319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=618a9075-7207-4e49-a936-97e5f5bb40b5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.075878956Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765498179075846667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=618a9075-7207-4e49-a936-97e5f5bb40b5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.077404223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a7e0528-7417-4a07-b732-a5d0d4639d9d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.077488281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a7e0528-7417-4a07-b732-a5d0d4639d9d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:39 addons-081397 crio[814]: time="2025-12-12 00:09:39.077768253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.has
h: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&Image
Spec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf6
77b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daaef29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a7e0528-7417-4a07-b732-a5d0d4639d9d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                     NAMESPACE
	825fa31ff05b6       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                5 minutes ago       Running             nginx                     0                   8c904991200ec       nginx                                   default
	25d63d362311b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e               6 minutes ago       Running             busybox                   0                   32cdf5109ec8d       busybox                                 default
	86266748a7014       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac   11 minutes ago      Running             registry-proxy            0                   4f522a691840e       registry-proxy-fdnc8                    kube-system
	0ee283d133145       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f          12 minutes ago      Running             amd-gpu-device-plugin     0                   d6396506b4332       amd-gpu-device-plugin-djxv6             kube-system
	636669d18a2e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                  12 minutes ago      Running             storage-provisioner       0                   d4c844a547362       storage-provisioner                     kube-system
	079f9768ce55c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                  12 minutes ago      Running             coredns                   0                   241bbeea7c618       coredns-66bc5c9577-prc7f                kube-system
	7f5ed4f373cfd       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                  12 minutes ago      Running             kube-proxy                0                   84c65d7d95ff4       kube-proxy-jwqpk                        kube-system
	7ace0e7fbfc94       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                  13 minutes ago      Running             kube-controller-manager   0                   f75b7d32aa473       kube-controller-manager-addons-081397   kube-system
	d8612fac71b8e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                  13 minutes ago      Running             kube-scheduler            0                   32069928e35e6       kube-scheduler-addons-081397            kube-system
	712e27a28f3ca       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                  13 minutes ago      Running             kube-apiserver            0                   78928c0146bf6       kube-apiserver-addons-081397            kube-system
	f00e427bcb7fb       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                  13 minutes ago      Running             etcd                      0                   d442318c9ea69       etcd-addons-081397                      kube-system
	
	
	==> coredns [079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56] <==
	[INFO] 10.244.0.10:55965 - 36786 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000078851s
	[INFO] 10.244.0.10:35277 - 43137 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000226731s
	[INFO] 10.244.0.10:35277 - 7861 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000195089s
	[INFO] 10.244.0.10:35277 - 1195 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000262442s
	[INFO] 10.244.0.10:35277 - 51464 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000117373s
	[INFO] 10.244.0.10:35277 - 15303 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000094595s
	[INFO] 10.244.0.10:35277 - 64467 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000215547s
	[INFO] 10.244.0.10:35277 - 23201 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00019896s
	[INFO] 10.244.0.10:35277 - 39512 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000113413s
	[INFO] 10.244.0.10:43788 - 44666 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.0002068s
	[INFO] 10.244.0.10:43788 - 30100 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000994478s
	[INFO] 10.244.0.10:43788 - 55423 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000449119s
	[INFO] 10.244.0.10:43788 - 13362 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000079225s
	[INFO] 10.244.0.10:43788 - 39972 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000089237s
	[INFO] 10.244.0.10:43788 - 63348 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000072634s
	[INFO] 10.244.0.10:43788 - 48606 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000157977s
	[INFO] 10.244.0.10:43788 - 19088 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000078145s
	[INFO] 10.244.0.10:39328 - 11868 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000197346s
	[INFO] 10.244.0.10:39328 - 16531 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000491479s
	[INFO] 10.244.0.10:39328 - 14259 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000121451s
	[INFO] 10.244.0.10:39328 - 46206 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00043323s
	[INFO] 10.244.0.10:39328 - 1833 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000082853s
	[INFO] 10.244.0.10:39328 - 29389 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000231754s
	[INFO] 10.244.0.10:39328 - 11736 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00011893s
	[INFO] 10.244.0.10:39328 - 25805 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.001169342s
	
	
	==> describe nodes <==
	Name:               addons-081397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-081397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=addons-081397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_11T23_56_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-081397
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 11 Dec 2025 23:56:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-081397
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:09:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:06:56 +0000   Thu, 11 Dec 2025 23:56:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:06:56 +0000   Thu, 11 Dec 2025 23:56:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:06:56 +0000   Thu, 11 Dec 2025 23:56:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:06:56 +0000   Thu, 11 Dec 2025 23:56:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    addons-081397
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3908Mi
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3908Mi
	  pods:               110
	System Info:
	  Machine ID:                 132f08c043de4a3fabcb9cf58535d902
	  System UUID:                132f08c0-43de-4a3f-abcb-9cf58535d902
	  Boot ID:                    7a0deef8-e8c7-4912-a254-b2bd4a5f2873
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  default                     hello-world-app-5d498dc89-gqw57                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 amd-gpu-device-plugin-djxv6                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-prc7f                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-081397                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-081397                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-081397                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-jwqpk                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-081397                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 registry-6b586f9694-f9q5b                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-creds-764b6fb674-fn77c                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-proxy-fdnc8                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-081397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-081397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-081397 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-081397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-081397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-081397 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m                kubelet          Node addons-081397 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node addons-081397 event: Registered Node addons-081397 in Controller
	
	
	==> dmesg <==
	[Dec12 00:00] kauditd_printk_skb: 5 callbacks suppressed
	[Dec12 00:01] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.217370] kauditd_printk_skb: 65 callbacks suppressed
	[  +8.838372] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.897990] kauditd_printk_skb: 38 callbacks suppressed
	[ +21.981224] kauditd_printk_skb: 2 callbacks suppressed
	[Dec12 00:02] kauditd_printk_skb: 20 callbacks suppressed
	[Dec12 00:03] kauditd_printk_skb: 26 callbacks suppressed
	[ +10.025958] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.337280] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.693930] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.887520] kauditd_printk_skb: 43 callbacks suppressed
	[  +1.616504] kauditd_printk_skb: 83 callbacks suppressed
	[Dec12 00:04] kauditd_printk_skb: 89 callbacks suppressed
	[  +0.000054] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.912354] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.453375] kauditd_printk_skb: 127 callbacks suppressed
	[  +0.000073] kauditd_printk_skb: 11 callbacks suppressed
	[Dec12 00:06] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.863558] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.805188] kauditd_printk_skb: 27 callbacks suppressed
	[  +0.008747] kauditd_printk_skb: 53 callbacks suppressed
	[Dec12 00:07] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.000073] kauditd_printk_skb: 13 callbacks suppressed
	[Dec12 00:08] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529] <==
	{"level":"info","ts":"2025-12-11T23:57:54.563834Z","caller":"traceutil/trace.go:172","msg":"trace[62183190] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1035; }","duration":"119.045886ms","start":"2025-12-11T23:57:54.444782Z","end":"2025-12-11T23:57:54.563827Z","steps":["trace[62183190] 'agreement among raft nodes before linearized reading'  (duration: 118.982426ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-11T23:57:54.564190Z","caller":"traceutil/trace.go:172","msg":"trace[2039299796] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"179.02635ms","start":"2025-12-11T23:57:54.385155Z","end":"2025-12-11T23:57:54.564182Z","steps":["trace[2039299796] 'process raft request'  (duration: 178.524709ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:57:54.565247Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.428918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:57:54.565413Z","caller":"traceutil/trace.go:172","msg":"trace[222868242] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1035; }","duration":"119.534642ms","start":"2025-12-11T23:57:54.445807Z","end":"2025-12-11T23:57:54.565342Z","steps":["trace[222868242] 'agreement among raft nodes before linearized reading'  (duration: 119.367809ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-11T23:58:03.552158Z","caller":"traceutil/trace.go:172","msg":"trace[1638119342] linearizableReadLoop","detail":"{readStateIndex:1095; appliedIndex:1096; }","duration":"156.418496ms","start":"2025-12-11T23:58:03.395726Z","end":"2025-12-11T23:58:03.552144Z","steps":["trace[1638119342] 'read index received'  (duration: 156.415444ms)","trace[1638119342] 'applied index is now lower than readState.Index'  (duration: 2.503µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-11T23:58:03.552301Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.56477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:58:03.552320Z","caller":"traceutil/trace.go:172","msg":"trace[928892129] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1059; }","duration":"156.592939ms","start":"2025-12-11T23:58:03.395722Z","end":"2025-12-11T23:58:03.552315Z","steps":["trace[928892129] 'agreement among raft nodes before linearized reading'  (duration: 156.542706ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:58:03.554244Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.397714ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:58:03.555824Z","caller":"traceutil/trace.go:172","msg":"trace[949728136] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1059; }","duration":"111.983139ms","start":"2025-12-11T23:58:03.443830Z","end":"2025-12-11T23:58:03.555813Z","steps":["trace[949728136] 'agreement among raft nodes before linearized reading'  (duration: 110.370385ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-11T23:58:03.554796Z","caller":"traceutil/trace.go:172","msg":"trace[1547687040] transaction","detail":"{read_only:false; response_revision:1060; number_of_response:1; }","duration":"112.058069ms","start":"2025-12-11T23:58:03.442727Z","end":"2025-12-11T23:58:03.554786Z","steps":["trace[1547687040] 'process raft request'  (duration: 111.966352ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:58:03.555039Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.923532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:58:03.556516Z","caller":"traceutil/trace.go:172","msg":"trace[1507526217] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1060; }","duration":"113.405565ms","start":"2025-12-11T23:58:03.443103Z","end":"2025-12-11T23:58:03.556508Z","steps":["trace[1507526217] 'agreement among raft nodes before linearized reading'  (duration: 111.826397ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:59:39.393302Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.001392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-12-11T23:59:39.393692Z","caller":"traceutil/trace.go:172","msg":"trace[1235881685] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1239; }","duration":"171.464156ms","start":"2025-12-11T23:59:39.222198Z","end":"2025-12-11T23:59:39.393662Z","steps":["trace[1235881685] 'range keys from in-memory index tree'  (duration: 170.767828ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:59:39.393736Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"240.832598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:59:39.393801Z","caller":"traceutil/trace.go:172","msg":"trace[1862727742] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1239; }","duration":"240.918211ms","start":"2025-12-11T23:59:39.152870Z","end":"2025-12-11T23:59:39.393789Z","steps":["trace[1862727742] 'range keys from in-memory index tree'  (duration: 240.669473ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:01:18.494075Z","caller":"traceutil/trace.go:172","msg":"trace[729500464] transaction","detail":"{read_only:false; response_revision:1398; number_of_response:1; }","duration":"106.783316ms","start":"2025-12-12T00:01:18.387266Z","end":"2025-12-12T00:01:18.494049Z","steps":["trace[729500464] 'process raft request'  (duration: 106.410306ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:02:27.300606Z","caller":"traceutil/trace.go:172","msg":"trace[636598247] transaction","detail":"{read_only:false; response_revision:1559; number_of_response:1; }","duration":"178.765669ms","start":"2025-12-12T00:02:27.121805Z","end":"2025-12-12T00:02:27.300571Z","steps":["trace[636598247] 'process raft request'  (duration: 178.598198ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:03:50.302340Z","caller":"traceutil/trace.go:172","msg":"trace[1017299845] linearizableReadLoop","detail":"{readStateIndex:1944; appliedIndex:1944; }","duration":"211.137553ms","start":"2025-12-12T00:03:50.091151Z","end":"2025-12-12T00:03:50.302289Z","steps":["trace[1017299845] 'read index received'  (duration: 211.129428ms)","trace[1017299845] 'applied index is now lower than readState.Index'  (duration: 7.353µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T00:03:50.302716Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"211.444735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:03:50.302750Z","caller":"traceutil/trace.go:172","msg":"trace[412680698] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1831; }","duration":"211.595679ms","start":"2025-12-12T00:03:50.091146Z","end":"2025-12-12T00:03:50.302742Z","steps":["trace[412680698] 'agreement among raft nodes before linearized reading'  (duration: 211.378448ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:03:50.302919Z","caller":"traceutil/trace.go:172","msg":"trace[333147361] transaction","detail":"{read_only:false; response_revision:1832; number_of_response:1; }","duration":"278.806483ms","start":"2025-12-12T00:03:50.024100Z","end":"2025-12-12T00:03:50.302907Z","steps":["trace[333147361] 'process raft request'  (duration: 278.330678ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:06:28.833586Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1445}
	{"level":"info","ts":"2025-12-12T00:06:28.938406Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1445,"took":"103.595743ms","hash":3397286304,"current-db-size-bytes":6336512,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4149248,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2025-12-12T00:06:28.938497Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3397286304,"revision":1445,"compact-revision":-1}
	
	
	==> kernel <==
	 00:09:39 up 13 min,  0 users,  load average: 1.73, 1.44, 1.04
	Linux addons-081397 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0] <==
	E1211 23:57:51.421606       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.158.20:443: connect: connection refused" logger="UnhandledError"
	E1211 23:57:51.423760       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.158.20:443: connect: connection refused" logger="UnhandledError"
	I1211 23:57:51.587823       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 00:03:28.639031       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8443->192.168.39.1:47840: use of closed network connection
	E1212 00:03:28.907630       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8443->192.168.39.1:47866: use of closed network connection
	I1212 00:03:38.672372       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.240.212"}
	I1212 00:03:52.460336       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1212 00:03:56.689684       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 00:03:56.942418       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.190.115"}
	I1212 00:04:08.084581       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1212 00:04:26.464334       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.464735       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.589812       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.589919       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.683731       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.683804       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.703860       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.704083       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.747552       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.747633       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1212 00:04:27.684485       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1212 00:04:27.749684       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1212 00:04:27.811321       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1212 00:06:30.996030       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:06:53.209224       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.181.204"}
	
	
	==> kube-controller-manager [7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1] <==
	E1212 00:07:11.884892       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:07:11.886599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:07:30.876170       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:07:30.877604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:07:34.360288       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:07:34.361732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:08:02.704051       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:08:02.706405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:08:05.298403       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:08:05.299999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:08:14.084834       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:08:14.086123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:08:44.822094       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:08:44.823361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:08:55.438710       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:08:55.440165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:09:07.709692       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:09:07.711433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:09:08.990053       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1212 00:09:23.991182       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1212 00:09:32.484358       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:09:32.485369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:09:36.091029       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:09:36.092317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:09:38.992322       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	
	
	==> kube-proxy [7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a] <==
	I1211 23:56:41.129554       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1211 23:56:41.230792       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1211 23:56:41.230832       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.2"]
	E1211 23:56:41.230926       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:56:41.372420       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1211 23:56:41.372474       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1211 23:56:41.372505       1 server_linux.go:132] "Using iptables Proxier"
	I1211 23:56:41.403791       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:56:41.404681       1 server.go:527] "Version info" version="v1.34.2"
	I1211 23:56:41.404798       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:56:41.409627       1 config.go:200] "Starting service config controller"
	I1211 23:56:41.409659       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1211 23:56:41.409674       1 config.go:106] "Starting endpoint slice config controller"
	I1211 23:56:41.409677       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1211 23:56:41.409687       1 config.go:403] "Starting serviceCIDR config controller"
	I1211 23:56:41.409690       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1211 23:56:41.421538       1 config.go:309] "Starting node config controller"
	I1211 23:56:41.421577       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1211 23:56:41.421584       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1211 23:56:41.510201       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1211 23:56:41.510238       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1211 23:56:41.510294       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379] <==
	E1211 23:56:31.058088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1211 23:56:31.058174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1211 23:56:31.058339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1211 23:56:31.058520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1211 23:56:31.058583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1211 23:56:31.878612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1211 23:56:31.916337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1211 23:56:31.929867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1211 23:56:31.934421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1211 23:56:31.956823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1211 23:56:31.994674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1211 23:56:32.004329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1211 23:56:32.010178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1211 23:56:32.026980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1211 23:56:32.052788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1211 23:56:32.154842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1211 23:56:32.220469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1211 23:56:32.267618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1211 23:56:32.308064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1211 23:56:32.344466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1211 23:56:32.371737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1211 23:56:32.397888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1211 23:56:32.548714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1211 23:56:32.628885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1211 23:56:34.946153       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:09:05 addons-081397 kubelet[1522]: E1212 00:09:05.281682    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765498145281049415 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:09:14 addons-081397 kubelet[1522]: I1212 00:09:14.547523    1522 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-fdnc8" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:09:15 addons-081397 kubelet[1522]: E1212 00:09:15.284622    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765498155284118373 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:09:15 addons-081397 kubelet[1522]: E1212 00:09:15.284654    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765498155284118373 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:09:16 addons-081397 kubelet[1522]: E1212 00:09:16.681449    1522 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e"
	Dec 12 00:09:16 addons-081397 kubelet[1522]: E1212 00:09:16.681508    1522 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e"
	Dec 12 00:09:16 addons-081397 kubelet[1522]: E1212 00:09:16.681732    1522 kuberuntime_manager.go:1449] "Unhandled Error" err="container registry start failed in pod registry-6b586f9694-f9q5b_kube-system(96c372a4-ae7e-4df5-9a48-525fc42f8bc5): ErrImagePull: reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 12 00:09:16 addons-081397 kubelet[1522]: E1212 00:09:16.681778    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ErrImagePull: \"reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-f9q5b" podUID="96c372a4-ae7e-4df5-9a48-525fc42f8bc5"
	Dec 12 00:09:25 addons-081397 kubelet[1522]: E1212 00:09:25.287768    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765498165287182233 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:09:25 addons-081397 kubelet[1522]: E1212 00:09:25.287828    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765498165287182233 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:09:31 addons-081397 kubelet[1522]: I1212 00:09:31.546275    1522 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-f9q5b" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:09:31 addons-081397 kubelet[1522]: E1212 00:09:31.550580    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-f9q5b" podUID="96c372a4-ae7e-4df5-9a48-525fc42f8bc5"
	Dec 12 00:09:34 addons-081397 kubelet[1522]: I1212 00:09:34.577116    1522 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gvxm\" (UniqueName: \"kubernetes.io/projected/b792e8c5-5d38-4540-b39b-8c2a3f475c97-kube-api-access-5gvxm\") pod \"b792e8c5-5d38-4540-b39b-8c2a3f475c97\" (UID: \"b792e8c5-5d38-4540-b39b-8c2a3f475c97\") "
	Dec 12 00:09:34 addons-081397 kubelet[1522]: I1212 00:09:34.577214    1522 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b792e8c5-5d38-4540-b39b-8c2a3f475c97-config-volume\") pod \"b792e8c5-5d38-4540-b39b-8c2a3f475c97\" (UID: \"b792e8c5-5d38-4540-b39b-8c2a3f475c97\") "
	Dec 12 00:09:34 addons-081397 kubelet[1522]: I1212 00:09:34.577831    1522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b792e8c5-5d38-4540-b39b-8c2a3f475c97-config-volume" (OuterVolumeSpecName: "config-volume") pod "b792e8c5-5d38-4540-b39b-8c2a3f475c97" (UID: "b792e8c5-5d38-4540-b39b-8c2a3f475c97"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 12 00:09:34 addons-081397 kubelet[1522]: I1212 00:09:34.580492    1522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b792e8c5-5d38-4540-b39b-8c2a3f475c97-kube-api-access-5gvxm" (OuterVolumeSpecName: "kube-api-access-5gvxm") pod "b792e8c5-5d38-4540-b39b-8c2a3f475c97" (UID: "b792e8c5-5d38-4540-b39b-8c2a3f475c97"). InnerVolumeSpecName "kube-api-access-5gvxm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 12 00:09:34 addons-081397 kubelet[1522]: I1212 00:09:34.678189    1522 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5gvxm\" (UniqueName: \"kubernetes.io/projected/b792e8c5-5d38-4540-b39b-8c2a3f475c97-kube-api-access-5gvxm\") on node \"addons-081397\" DevicePath \"\""
	Dec 12 00:09:34 addons-081397 kubelet[1522]: I1212 00:09:34.678228    1522 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b792e8c5-5d38-4540-b39b-8c2a3f475c97-config-volume\") on node \"addons-081397\" DevicePath \"\""
	Dec 12 00:09:34 addons-081397 kubelet[1522]: I1212 00:09:34.884680    1522 scope.go:117] "RemoveContainer" containerID="c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b"
	Dec 12 00:09:35 addons-081397 kubelet[1522]: I1212 00:09:35.014596    1522 scope.go:117] "RemoveContainer" containerID="c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b"
	Dec 12 00:09:35 addons-081397 kubelet[1522]: E1212 00:09:35.016135    1522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b\": container with ID starting with c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b not found: ID does not exist" containerID="c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b"
	Dec 12 00:09:35 addons-081397 kubelet[1522]: I1212 00:09:35.016177    1522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b"} err="failed to get container status \"c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b\": rpc error: code = NotFound desc = could not find container \"c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b\": container with ID starting with c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b not found: ID does not exist"
	Dec 12 00:09:35 addons-081397 kubelet[1522]: E1212 00:09:35.291105    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765498175290478208 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:09:35 addons-081397 kubelet[1522]: E1212 00:09:35.291154    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765498175290478208 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:09:36 addons-081397 kubelet[1522]: I1212 00:09:36.557054    1522 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b792e8c5-5d38-4540-b39b-8c2a3f475c97" path="/var/lib/kubelet/pods/b792e8c5-5d38-4540-b39b-8c2a3f475c97/volumes"
	
	
	==> storage-provisioner [636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529] <==
	W1212 00:09:14.925471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:16.929533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:16.940160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:18.945271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:18.955450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:20.960058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:20.971829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:22.977700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:22.986787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:24.992105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:25.003443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:27.010051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:27.018262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:29.023304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:29.034108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:31.039336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:31.049706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:33.054097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:33.064604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:35.069835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:35.076828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:37.080419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:37.090515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:39.094348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:39.102662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-081397 -n addons-081397
helpers_test.go:270: (dbg) Run:  kubectl --context addons-081397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-gqw57 test-local-path registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-081397 describe pod hello-world-app-5d498dc89-gqw57 test-local-path registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-081397 describe pod hello-world-app-5d498dc89-gqw57 test-local-path registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc: exit status 1 (115.111194ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-gqw57
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-081397/192.168.39.2
	Start Time:       Fri, 12 Dec 2025 00:06:53 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:           10.244.0.31
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rk5gl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rk5gl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m47s                default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-gqw57 to addons-081397
	  Warning  Failed     84s                  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     84s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    83s                  kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     83s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    70s (x2 over 2m47s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjvf5 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-sjvf5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-6b586f9694-f9q5b" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-fn77c" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-081397 describe pod hello-world-app-5d498dc89-gqw57 test-local-path registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable registry --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/Registry (363.44s)

TestAddons/parallel/Ingress (189.88s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-081397 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-081397 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-081397 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [5452cb51-90f9-4bce-965c-64e57e2a83e9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [5452cb51-90f9-4bce-965c-64e57e2a83e9] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 41.004732314s
I1212 00:04:37.969170  190272 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-081397 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.831326097s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-081397 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-081397 -n addons-081397
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-081397 logs -n 25: (1.763526757s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-859495 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-859495 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-859495                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-859495 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-525167                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-525167 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-449217                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-449217 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-859495                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-859495 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ --download-only -p binary-mirror-928519 --alsologtostderr --binary-mirror http://127.0.0.1:46143 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-928519 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ -p binary-mirror-928519                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-928519 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ addons  │ enable dashboard -p addons-081397                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-081397        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-081397                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-081397        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ start   │ -p addons-081397 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-081397        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 12 Dec 25 00:02 UTC │
	│ addons  │ addons-081397 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:02 UTC │ 12 Dec 25 00:02 UTC │
	│ addons  │ addons-081397 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ enable headlamp -p addons-081397 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-081397                                                                                                                                                                                                                                                                                                                                                                                         │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:04 UTC │
	│ addons  │ addons-081397 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:04 UTC │ 12 Dec 25 00:04 UTC │
	│ addons  │ addons-081397 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:04 UTC │ 12 Dec 25 00:04 UTC │
	│ ssh     │ addons-081397 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:04 UTC │                     │
	│ addons  │ addons-081397 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ addons  │ addons-081397 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ip      │ addons-081397 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:55:51.508824  191080 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:51.508961  191080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:51.508968  191080 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:51.508973  191080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:51.509212  191080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1211 23:55:51.509810  191080 out.go:368] Setting JSON to false
	I1211 23:55:51.510832  191080 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":20296,"bootTime":1765477056,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:55:51.510906  191080 start.go:143] virtualization: kvm guest
	I1211 23:55:51.512916  191080 out.go:179] * [addons-081397] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1211 23:55:51.514286  191080 notify.go:221] Checking for updates...
	I1211 23:55:51.514305  191080 out.go:179]   - MINIKUBE_LOCATION=22101
	I1211 23:55:51.515624  191080 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:51.517281  191080 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1211 23:55:51.518706  191080 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:51.520288  191080 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:55:51.521862  191080 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:55:51.523574  191080 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 23:55:51.556952  191080 out.go:179] * Using the kvm2 driver based on user configuration
	I1211 23:55:51.558571  191080 start.go:309] selected driver: kvm2
	I1211 23:55:51.558600  191080 start.go:927] validating driver "kvm2" against <nil>
	I1211 23:55:51.558629  191080 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:55:51.559389  191080 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1211 23:55:51.559736  191080 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:55:51.559767  191080 cni.go:84] Creating CNI manager for ""
	I1211 23:55:51.559823  191080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:55:51.559835  191080 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 23:55:51.559888  191080 start.go:353] cluster config:
	{Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:55:51.560015  191080 iso.go:125] acquiring lock: {Name:mkc8bf4754eb4f0261bb252fe2c8bf1a2bf2967f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:55:51.561727  191080 out.go:179] * Starting "addons-081397" primary control-plane node in "addons-081397" cluster
	I1211 23:55:51.563063  191080 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:55:51.563108  191080 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1211 23:55:51.563116  191080 cache.go:65] Caching tarball of preloaded images
	I1211 23:55:51.563256  191080 preload.go:238] Found /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:55:51.563274  191080 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1211 23:55:51.563705  191080 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/config.json ...
	I1211 23:55:51.563732  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/config.json: {Name:mk3f56184a595aa65236de2721f264b9d77bbfd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:55:51.563928  191080 start.go:360] acquireMachinesLock for addons-081397: {Name:mk7557506c78bc6cb73692cb48d3039f590aa12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:55:51.564001  191080 start.go:364] duration metric: took 52.499µs to acquireMachinesLock for "addons-081397"
	I1211 23:55:51.564027  191080 start.go:93] Provisioning new machine with config: &{Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:55:51.564111  191080 start.go:125] createHost starting for "" (driver="kvm2")
	I1211 23:55:51.566772  191080 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1211 23:55:51.567024  191080 start.go:159] libmachine.API.Create for "addons-081397" (driver="kvm2")
	I1211 23:55:51.567078  191080 client.go:173] LocalClient.Create starting
	I1211 23:55:51.567214  191080 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem
	I1211 23:55:51.634646  191080 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem
	I1211 23:55:51.761850  191080 main.go:143] libmachine: creating domain...
	I1211 23:55:51.761879  191080 main.go:143] libmachine: creating network...
	I1211 23:55:51.763511  191080 main.go:143] libmachine: found existing default network
	I1211 23:55:51.763716  191080 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1211 23:55:51.764419  191080 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dae890}
	I1211 23:55:51.764553  191080 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-081397</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1211 23:55:51.771343  191080 main.go:143] libmachine: creating private network mk-addons-081397 192.168.39.0/24...
	I1211 23:55:51.876571  191080 main.go:143] libmachine: private network mk-addons-081397 192.168.39.0/24 created
	I1211 23:55:51.876999  191080 main.go:143] libmachine: <network>
	  <name>mk-addons-081397</name>
	  <uuid>f81ed5cb-0804-4477-9781-0372afa282e4</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:59:29:45'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1211 23:55:51.877044  191080 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397 ...
	I1211 23:55:51.877068  191080 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22101-186349/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
	I1211 23:55:51.877078  191080 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:51.877153  191080 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22101-186349/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22101-186349/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso...
	I1211 23:55:52.159080  191080 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa...
	I1211 23:55:52.239938  191080 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/addons-081397.rawdisk...
	I1211 23:55:52.239993  191080 main.go:143] libmachine: Writing magic tar header
	I1211 23:55:52.240026  191080 main.go:143] libmachine: Writing SSH key tar header
	I1211 23:55:52.240106  191080 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397 ...
	I1211 23:55:52.240169  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397
	I1211 23:55:52.240206  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397 (perms=drwx------)
	I1211 23:55:52.240215  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349/.minikube/machines
	I1211 23:55:52.240224  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:55:52.240232  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:52.240240  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349/.minikube (perms=drwxr-xr-x)
	I1211 23:55:52.240250  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349
	I1211 23:55:52.240258  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349 (perms=drwxrwxr-x)
	I1211 23:55:52.240268  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:55:52.240275  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:55:52.240283  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1211 23:55:52.240291  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1211 23:55:52.240299  191080 main.go:143] libmachine: checking permissions on dir: /home
	I1211 23:55:52.240306  191080 main.go:143] libmachine: skipping /home - not owner
	I1211 23:55:52.240309  191080 main.go:143] libmachine: defining domain...
	I1211 23:55:52.242720  191080 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-081397</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/addons-081397.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-081397'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1211 23:55:52.249320  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:07:bd:c2 in network default
	I1211 23:55:52.250641  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:52.250680  191080 main.go:143] libmachine: starting domain...
	I1211 23:55:52.250686  191080 main.go:143] libmachine: ensuring networks are active...
	I1211 23:55:52.252166  191080 main.go:143] libmachine: Ensuring network default is active
	I1211 23:55:52.253166  191080 main.go:143] libmachine: Ensuring network mk-addons-081397 is active
	I1211 23:55:52.254226  191080 main.go:143] libmachine: getting domain XML...
	I1211 23:55:52.255944  191080 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-081397</name>
	  <uuid>132f08c0-43de-4a3f-abcb-9cf58535d902</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/addons-081397.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:2b:32:89'/>
	      <source network='mk-addons-081397'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:07:bd:c2'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1211 23:55:53.688550  191080 main.go:143] libmachine: waiting for domain to start...
	I1211 23:55:53.691114  191080 main.go:143] libmachine: domain is now running
	I1211 23:55:53.691144  191080 main.go:143] libmachine: waiting for IP...
	I1211 23:55:53.692424  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:53.693801  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:53.693826  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:53.694334  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:53.694402  191080 retry.go:31] will retry after 260.574844ms: waiting for domain to come up
	I1211 23:55:53.957397  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:53.958627  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:53.958657  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:53.959170  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:53.959230  191080 retry.go:31] will retry after 343.725464ms: waiting for domain to come up
	I1211 23:55:54.305232  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:54.306166  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:54.306193  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:54.306730  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:54.306782  191080 retry.go:31] will retry after 478.083756ms: waiting for domain to come up
	I1211 23:55:54.787051  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:54.788263  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:54.788294  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:54.788968  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:54.789021  191080 retry.go:31] will retry after 586.83961ms: waiting for domain to come up
	I1211 23:55:55.378616  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:55.379761  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:55.379794  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:55.380438  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:55.380514  191080 retry.go:31] will retry after 629.739442ms: waiting for domain to come up
	I1211 23:55:56.011678  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:56.012771  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:56.012794  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:56.013869  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:56.013951  191080 retry.go:31] will retry after 838.290437ms: waiting for domain to come up
	I1211 23:55:56.853752  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:56.854450  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:56.854485  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:56.854918  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:56.854979  191080 retry.go:31] will retry after 1.020736825s: waiting for domain to come up
	I1211 23:55:57.877350  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:57.878104  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:57.878134  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:57.878522  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:57.878563  191080 retry.go:31] will retry after 1.394206578s: waiting for domain to come up
	I1211 23:55:59.275153  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:59.276377  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:59.276409  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:59.276994  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:59.277049  191080 retry.go:31] will retry after 1.4774988s: waiting for domain to come up
	I1211 23:56:00.757189  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:00.758049  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:00.758071  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:00.758450  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:00.758518  191080 retry.go:31] will retry after 1.704024367s: waiting for domain to come up
	I1211 23:56:02.464578  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:02.465672  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:02.465713  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:02.466390  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:02.466496  191080 retry.go:31] will retry after 2.558039009s: waiting for domain to come up
	I1211 23:56:05.028156  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:05.029424  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:05.029476  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:05.030141  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:05.030218  191080 retry.go:31] will retry after 2.713185396s: waiting for domain to come up
	I1211 23:56:07.745837  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:07.746810  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:07.746835  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:07.747308  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:07.747359  191080 retry.go:31] will retry after 3.017005916s: waiting for domain to come up
	I1211 23:56:10.768106  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:10.769156  191080 main.go:143] libmachine: domain addons-081397 has current primary IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:10.769185  191080 main.go:143] libmachine: found domain IP: 192.168.39.2
	I1211 23:56:10.769196  191080 main.go:143] libmachine: reserving static IP address...
	I1211 23:56:10.769843  191080 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-081397", mac: "52:54:00:2b:32:89", ip: "192.168.39.2"} in network mk-addons-081397
	I1211 23:56:11.003302  191080 main.go:143] libmachine: reserved static IP address 192.168.39.2 for domain addons-081397
	I1211 23:56:11.003331  191080 main.go:143] libmachine: waiting for SSH...
	I1211 23:56:11.003337  191080 main.go:143] libmachine: Getting to WaitForSSH function...
	I1211 23:56:11.008569  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.009090  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.009115  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.009350  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.009619  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.009631  191080 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1211 23:56:11.126360  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:56:11.126895  191080 main.go:143] libmachine: domain creation complete
	I1211 23:56:11.129784  191080 machine.go:94] provisionDockerMachine start ...
	I1211 23:56:11.134589  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.135537  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.135574  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.136010  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.136277  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.136290  191080 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 23:56:11.257254  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1211 23:56:11.257302  191080 buildroot.go:166] provisioning hostname "addons-081397"
	I1211 23:56:11.261573  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.262389  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.262457  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.262926  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.263212  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.263234  191080 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-081397 && echo "addons-081397" | sudo tee /etc/hostname
	I1211 23:56:11.410142  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-081397
	
	I1211 23:56:11.414271  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.414882  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.414917  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.415210  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.415441  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.415482  191080 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-081397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-081397/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-081397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:56:11.555358  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:56:11.555395  191080 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22101-186349/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-186349/.minikube}
	I1211 23:56:11.555420  191080 buildroot.go:174] setting up certificates
	I1211 23:56:11.555443  191080 provision.go:84] configureAuth start
	I1211 23:56:11.558885  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.559509  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.559565  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.562716  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.563314  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.563346  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.563750  191080 provision.go:143] copyHostCerts
	I1211 23:56:11.563901  191080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/cert.pem (1123 bytes)
	I1211 23:56:11.564087  191080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/key.pem (1675 bytes)
	I1211 23:56:11.564163  191080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/ca.pem (1082 bytes)
	I1211 23:56:11.564231  191080 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem org=jenkins.addons-081397 san=[127.0.0.1 192.168.39.2 addons-081397 localhost minikube]
	I1211 23:56:11.604096  191080 provision.go:177] copyRemoteCerts
	I1211 23:56:11.604171  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:56:11.607337  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.607977  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.608015  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.608218  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:11.699591  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 23:56:11.739646  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1211 23:56:11.780870  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 23:56:11.821711  191080 provision.go:87] duration metric: took 266.231617ms to configureAuth
	I1211 23:56:11.821755  191080 buildroot.go:189] setting minikube options for container-runtime
	I1211 23:56:11.822007  191080 config.go:182] Loaded profile config "addons-081397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:11.826045  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.826550  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.826578  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.826785  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.827068  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.827088  191080 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:56:12.345303  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 23:56:12.345334  191080 machine.go:97] duration metric: took 1.2155135s to provisionDockerMachine
	I1211 23:56:12.345348  191080 client.go:176] duration metric: took 20.778259004s to LocalClient.Create
	I1211 23:56:12.345369  191080 start.go:167] duration metric: took 20.77834555s to libmachine.API.Create "addons-081397"
	I1211 23:56:12.345379  191080 start.go:293] postStartSetup for "addons-081397" (driver="kvm2")
	I1211 23:56:12.345393  191080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:56:12.345498  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:56:12.350156  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.351165  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.351226  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.351544  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:12.444149  191080 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:56:12.450354  191080 info.go:137] Remote host: Buildroot 2025.02
	I1211 23:56:12.450386  191080 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-186349/.minikube/addons for local assets ...
	I1211 23:56:12.450452  191080 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-186349/.minikube/files for local assets ...
	I1211 23:56:12.450508  191080 start.go:296] duration metric: took 105.122285ms for postStartSetup
	I1211 23:56:12.489061  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.489811  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.489855  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.490235  191080 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/config.json ...
	I1211 23:56:12.490597  191080 start.go:128] duration metric: took 20.9264692s to createHost
	I1211 23:56:12.493999  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.494451  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.494490  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.494674  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:12.494897  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:12.494909  191080 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1211 23:56:12.615405  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765497372.576443288
	
	I1211 23:56:12.615439  191080 fix.go:216] guest clock: 1765497372.576443288
	I1211 23:56:12.615447  191080 fix.go:229] Guest: 2025-12-11 23:56:12.576443288 +0000 UTC Remote: 2025-12-11 23:56:12.490625673 +0000 UTC m=+21.040527790 (delta=85.817615ms)
	I1211 23:56:12.615500  191080 fix.go:200] guest clock delta is within tolerance: 85.817615ms
	I1211 23:56:12.615508  191080 start.go:83] releasing machines lock for "addons-081397", held for 21.051491664s
	I1211 23:56:12.619172  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.619799  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.619831  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.620772  191080 ssh_runner.go:195] Run: cat /version.json
	I1211 23:56:12.620876  191080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:56:12.625375  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.625530  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.626036  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.626063  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.626330  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.626345  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:12.626381  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.626618  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:12.717381  191080 ssh_runner.go:195] Run: systemctl --version
	I1211 23:56:12.749852  191080 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:56:13.078529  191080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 23:56:13.088885  191080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 23:56:13.089007  191080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:56:13.118717  191080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1211 23:56:13.118763  191080 start.go:496] detecting cgroup driver to use...
	I1211 23:56:13.118864  191080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:56:13.148400  191080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:56:13.169798  191080 docker.go:218] disabling cri-docker service (if available) ...
	I1211 23:56:13.169888  191080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:56:13.191896  191080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:56:13.211802  191080 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:56:13.376765  191080 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:56:13.606305  191080 docker.go:234] disabling docker service ...
	I1211 23:56:13.606403  191080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:56:13.625180  191080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:56:13.643232  191080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:56:13.829218  191080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:56:14.000354  191080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:56:14.021612  191080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:56:14.050867  191080 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 23:56:14.050963  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.068612  191080 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 23:56:14.068701  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.086254  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.104697  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.123074  191080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:56:14.143227  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.161079  191080 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.188908  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.207821  191080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:56:14.223124  191080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1211 23:56:14.223216  191080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1211 23:56:14.252980  191080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 23:56:14.270522  191080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:14.430888  191080 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 23:56:14.564516  191080 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:56:14.564671  191080 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:56:14.574658  191080 start.go:564] Will wait 60s for crictl version
	I1211 23:56:14.574811  191080 ssh_runner.go:195] Run: which crictl
	I1211 23:56:14.580945  191080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 23:56:14.633033  191080 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1211 23:56:14.633155  191080 ssh_runner.go:195] Run: crio --version
	I1211 23:56:14.669436  191080 ssh_runner.go:195] Run: crio --version
	I1211 23:56:14.710252  191080 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1211 23:56:14.715883  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:14.716478  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:14.716519  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:14.716765  191080 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1211 23:56:14.724237  191080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:56:14.744504  191080 kubeadm.go:884] updating cluster {Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 23:56:14.744646  191080 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:56:14.744696  191080 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:56:14.782232  191080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1211 23:56:14.782317  191080 ssh_runner.go:195] Run: which lz4
	I1211 23:56:14.788630  191080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 23:56:14.795116  191080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 23:56:14.795159  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1211 23:56:16.445424  191080 crio.go:462] duration metric: took 1.656827131s to copy over tarball
	I1211 23:56:16.445532  191080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 23:56:18.102205  191080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.656625041s)
	I1211 23:56:18.102245  191080 crio.go:469] duration metric: took 1.656768065s to extract the tarball
	I1211 23:56:18.102258  191080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1211 23:56:18.141443  191080 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:56:18.189200  191080 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:56:18.189229  191080 cache_images.go:86] Images are preloaded, skipping loading
	I1211 23:56:18.189239  191080 kubeadm.go:935] updating node { 192.168.39.2 8443 v1.34.2 crio true true} ...
	I1211 23:56:18.189344  191080 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-081397 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 23:56:18.189436  191080 ssh_runner.go:195] Run: crio config
	I1211 23:56:18.243325  191080 cni.go:84] Creating CNI manager for ""
	I1211 23:56:18.243368  191080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:56:18.243392  191080 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 23:56:18.243429  191080 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-081397 NodeName:addons-081397 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:56:18.243664  191080 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-081397"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 23:56:18.243802  191080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1211 23:56:18.259378  191080 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 23:56:18.259504  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 23:56:18.274263  191080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1211 23:56:18.301193  191080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:56:18.326928  191080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1211 23:56:18.352300  191080 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I1211 23:56:18.358187  191080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:56:18.378953  191080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:18.546541  191080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:56:18.581301  191080 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397 for IP: 192.168.39.2
	I1211 23:56:18.581326  191080 certs.go:195] generating shared ca certs ...
	I1211 23:56:18.581346  191080 certs.go:227] acquiring lock for ca certs: {Name:mkdc58adfd2cc299a76aeec81ac0d7f7d2a38e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.581537  191080 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key
	I1211 23:56:18.667363  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt ...
	I1211 23:56:18.667401  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt: {Name:mk1b55f33c9202ab57b68cfcba7feed18a5c869b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.667594  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key ...
	I1211 23:56:18.667607  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key: {Name:mk31aac21dc0da02b77cc3d7268007e3ddde417b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.667688  191080 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key
	I1211 23:56:18.787173  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt ...
	I1211 23:56:18.787207  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt: {Name:mk50e6f78e87c39b691065db3fbc22d4178cbab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.787389  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key ...
	I1211 23:56:18.787400  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key: {Name:mk3201307c9797e697c52cf7944b78460ad79885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.787484  191080 certs.go:257] generating profile certs ...
	I1211 23:56:18.787545  191080 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.key
	I1211 23:56:18.787567  191080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt with IP's: []
	I1211 23:56:18.836629  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt ...
	I1211 23:56:18.836666  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: {Name:mk4cd9c65ec1631677a6989710916cca92666039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.836848  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.key ...
	I1211 23:56:18.836869  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.key: {Name:mk158319f878ba2a2974fa05c9c5e81406b1ff04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.837128  191080 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68
	I1211 23:56:18.837174  191080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2]
	I1211 23:56:18.895323  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68 ...
	I1211 23:56:18.895360  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68: {Name:mka19cf3aa517a67c9823b9db6a0564ae2c88f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.895568  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68 ...
	I1211 23:56:18.895582  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68: {Name:mkcb32c8b3892cdbb32375c99cf73efb7e2d2ebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.895669  191080 certs.go:382] copying /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68 -> /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt
	I1211 23:56:18.895740  191080 certs.go:386] copying /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68 -> /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key
	I1211 23:56:18.895792  191080 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key
	I1211 23:56:18.895810  191080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt with IP's: []
	I1211 23:56:19.059957  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt ...
	I1211 23:56:19.059996  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt: {Name:mkeece2e2a9106cbaddd7935ae5c93b8b6536c2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:19.060202  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key ...
	I1211 23:56:19.060217  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key: {Name:mk7fa3201305a84265a30d592c7bfaa4ea9d3d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:19.060422  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 23:56:19.060478  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem (1082 bytes)
	I1211 23:56:19.060506  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:56:19.060532  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem (1675 bytes)
	I1211 23:56:19.061341  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:56:19.104179  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 23:56:19.148345  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:56:19.191324  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1211 23:56:19.230603  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1211 23:56:19.274335  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 23:56:19.314103  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:56:19.355420  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1211 23:56:19.392791  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:56:19.429841  191080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:56:19.455328  191080 ssh_runner.go:195] Run: openssl version
	I1211 23:56:19.463919  191080 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.478287  191080 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 23:56:19.494141  191080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.501262  191080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.501357  191080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.511987  191080 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 23:56:19.527366  191080 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1211 23:56:19.544629  191080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:56:19.551139  191080 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:56:19.551211  191080 kubeadm.go:401] StartCluster: {Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:56:19.551367  191080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:56:19.551501  191080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:56:19.601329  191080 cri.go:89] found id: ""
	I1211 23:56:19.601414  191080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:56:19.615890  191080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:56:19.632616  191080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:56:19.646731  191080 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:56:19.646765  191080 kubeadm.go:158] found existing configuration files:
	
	I1211 23:56:19.646828  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:56:19.660106  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:56:19.660190  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:56:19.676276  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:56:19.690027  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:56:19.690116  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:56:19.705756  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:56:19.720625  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:56:19.720715  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:56:19.735359  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:56:19.750390  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:56:19.750481  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 23:56:19.766951  191080 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 23:56:19.839756  191080 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1211 23:56:19.839847  191080 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 23:56:19.990602  191080 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:56:19.990863  191080 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:56:19.991043  191080 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:56:20.010193  191080 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:56:20.165972  191080 out.go:252]   - Generating certificates and keys ...
	I1211 23:56:20.166144  191080 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 23:56:20.166252  191080 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 23:56:20.166347  191080 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:56:20.551090  191080 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:56:20.773761  191080 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:56:21.138092  191080 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1211 23:56:21.423874  191080 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1211 23:56:21.424042  191080 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-081397 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I1211 23:56:21.781372  191080 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1211 23:56:21.781631  191080 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-081397 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I1211 23:56:22.783972  191080 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:56:22.973180  191080 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:56:23.396371  191080 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1211 23:56:23.396644  191080 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:56:23.822810  191080 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:56:24.134647  191080 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:56:24.293087  191080 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:56:24.542047  191080 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:56:24.865144  191080 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:56:24.865682  191080 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:56:24.869746  191080 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:56:24.871219  191080 out.go:252]   - Booting up control plane ...
	I1211 23:56:24.871351  191080 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:56:24.871523  191080 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:56:24.871597  191080 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:56:24.889102  191080 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:56:24.889275  191080 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 23:56:24.898513  191080 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 23:56:24.899113  191080 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:56:24.899188  191080 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 23:56:25.090240  191080 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:56:25.090397  191080 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:56:26.591737  191080 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502403531s
	I1211 23:56:26.595003  191080 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1211 23:56:26.595170  191080 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.2:8443/livez
	I1211 23:56:26.595328  191080 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1211 23:56:26.595488  191080 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1211 23:56:29.712995  191080 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.118803589s
	I1211 23:56:31.068676  191080 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.475444759s
	I1211 23:56:33.595001  191080 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002476016s
	I1211 23:56:33.626020  191080 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:56:33.642768  191080 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:56:33.672411  191080 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:56:33.672732  191080 kubeadm.go:319] [mark-control-plane] Marking the node addons-081397 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:56:33.697567  191080 kubeadm.go:319] [bootstrap-token] Using token: fx6xk6.14clsj7mtuippxxx
	I1211 23:56:33.699696  191080 out.go:252]   - Configuring RBAC rules ...
	I1211 23:56:33.699861  191080 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:56:33.705146  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:56:33.724431  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:56:33.735134  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:56:33.742267  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:56:33.751087  191080 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:56:34.005984  191080 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:56:34.545250  191080 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1211 23:56:35.004202  191080 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1211 23:56:35.005119  191080 kubeadm.go:319] 
	I1211 23:56:35.005179  191080 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1211 23:56:35.005184  191080 kubeadm.go:319] 
	I1211 23:56:35.005261  191080 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1211 23:56:35.005268  191080 kubeadm.go:319] 
	I1211 23:56:35.005289  191080 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1211 23:56:35.005347  191080 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:56:35.005431  191080 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:56:35.005483  191080 kubeadm.go:319] 
	I1211 23:56:35.005568  191080 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1211 23:56:35.005579  191080 kubeadm.go:319] 
	I1211 23:56:35.005647  191080 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:56:35.005662  191080 kubeadm.go:319] 
	I1211 23:56:35.005707  191080 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1211 23:56:35.005772  191080 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:56:35.005838  191080 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:56:35.005844  191080 kubeadm.go:319] 
	I1211 23:56:35.005915  191080 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:56:35.005983  191080 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1211 23:56:35.005989  191080 kubeadm.go:319] 
	I1211 23:56:35.006133  191080 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fx6xk6.14clsj7mtuippxxx \
	I1211 23:56:35.006283  191080 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c0b88820597315620ec0510f9ac83d55213c46f15e2d7641e43c80784b0671ae \
	I1211 23:56:35.006317  191080 kubeadm.go:319] 	--control-plane 
	I1211 23:56:35.006322  191080 kubeadm.go:319] 
	I1211 23:56:35.006403  191080 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:56:35.006410  191080 kubeadm.go:319] 
	I1211 23:56:35.006504  191080 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fx6xk6.14clsj7mtuippxxx \
	I1211 23:56:35.006639  191080 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c0b88820597315620ec0510f9ac83d55213c46f15e2d7641e43c80784b0671ae 
	I1211 23:56:35.009065  191080 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 23:56:35.009128  191080 cni.go:84] Creating CNI manager for ""
	I1211 23:56:35.009169  191080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:56:35.012077  191080 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1211 23:56:35.013875  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1211 23:56:35.030825  191080 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1211 23:56:35.061826  191080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:56:35.061965  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:35.061967  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-081397 minikube.k8s.io/updated_at=2025_12_11T23_56_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=addons-081397 minikube.k8s.io/primary=true
	I1211 23:56:35.142016  191080 ops.go:34] apiserver oom_adj: -16
	I1211 23:56:35.257509  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:35.758327  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:36.257620  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:36.757733  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:37.258377  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:37.758134  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:38.258440  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:38.758050  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:39.258437  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:39.757704  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:40.258657  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:40.495051  191080 kubeadm.go:1114] duration metric: took 5.433189491s to wait for elevateKubeSystemPrivileges
	I1211 23:56:40.495110  191080 kubeadm.go:403] duration metric: took 20.943905559s to StartCluster
	I1211 23:56:40.495141  191080 settings.go:142] acquiring lock: {Name:mkc54bc00cde7f692cc672e67ab0af4ae6a15c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:40.495326  191080 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1211 23:56:40.495951  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/kubeconfig: {Name:mkdf9d6588b522077beb3bc03f9eff4a2b248de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:40.496234  191080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:56:40.496280  191080 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:56:40.496340  191080 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1211 23:56:40.496488  191080 addons.go:70] Setting yakd=true in profile "addons-081397"
	I1211 23:56:40.496513  191080 addons.go:239] Setting addon yakd=true in "addons-081397"
	I1211 23:56:40.496519  191080 addons.go:70] Setting inspektor-gadget=true in profile "addons-081397"
	I1211 23:56:40.496555  191080 addons.go:239] Setting addon inspektor-gadget=true in "addons-081397"
	I1211 23:56:40.496571  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496571  191080 addons.go:70] Setting ingress=true in profile "addons-081397"
	I1211 23:56:40.496589  191080 config.go:182] Loaded profile config "addons-081397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:40.496605  191080 addons.go:239] Setting addon ingress=true in "addons-081397"
	I1211 23:56:40.496607  191080 addons.go:70] Setting metrics-server=true in profile "addons-081397"
	I1211 23:56:40.496619  191080 addons.go:70] Setting ingress-dns=true in profile "addons-081397"
	I1211 23:56:40.496623  191080 addons.go:239] Setting addon metrics-server=true in "addons-081397"
	I1211 23:56:40.496630  191080 addons.go:70] Setting cloud-spanner=true in profile "addons-081397"
	I1211 23:56:40.496582  191080 addons.go:70] Setting registry-creds=true in profile "addons-081397"
	I1211 23:56:40.496643  191080 addons.go:70] Setting gcp-auth=true in profile "addons-081397"
	I1211 23:56:40.496649  191080 addons.go:239] Setting addon cloud-spanner=true in "addons-081397"
	I1211 23:56:40.496652  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496658  191080 addons.go:239] Setting addon registry-creds=true in "addons-081397"
	I1211 23:56:40.496662  191080 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-081397"
	I1211 23:56:40.496670  191080 mustload.go:66] Loading cluster: addons-081397
	I1211 23:56:40.496674  191080 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-081397"
	I1211 23:56:40.496687  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496694  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496707  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496846  191080 config.go:182] Loaded profile config "addons-081397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:40.497455  191080 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-081397"
	I1211 23:56:40.497568  191080 addons.go:70] Setting registry=true in profile "addons-081397"
	I1211 23:56:40.497576  191080 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-081397"
	I1211 23:56:40.497609  191080 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-081397"
	I1211 23:56:40.497628  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497632  191080 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-081397"
	I1211 23:56:40.497653  191080 addons.go:70] Setting volcano=true in profile "addons-081397"
	I1211 23:56:40.497674  191080 addons.go:239] Setting addon volcano=true in "addons-081397"
	I1211 23:56:40.497708  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497837  191080 addons.go:70] Setting volumesnapshots=true in profile "addons-081397"
	I1211 23:56:40.497852  191080 addons.go:239] Setting addon volumesnapshots=true in "addons-081397"
	I1211 23:56:40.496582  191080 addons.go:70] Setting default-storageclass=true in profile "addons-081397"
	I1211 23:56:40.497876  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497894  191080 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-081397"
	I1211 23:56:40.496631  191080 addons.go:239] Setting addon ingress-dns=true in "addons-081397"
	I1211 23:56:40.498289  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496621  191080 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-081397"
	I1211 23:56:40.498652  191080 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-081397"
	I1211 23:56:40.498685  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.499011  191080 addons.go:70] Setting storage-provisioner=true in profile "addons-081397"
	I1211 23:56:40.499034  191080 addons.go:239] Setting addon storage-provisioner=true in "addons-081397"
	I1211 23:56:40.499062  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496606  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497596  191080 addons.go:239] Setting addon registry=true in "addons-081397"
	I1211 23:56:40.496653  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.499671  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.500663  191080 out.go:179] * Verifying Kubernetes components...
	I1211 23:56:40.502382  191080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:40.503922  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.506960  191080 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1211 23:56:40.507005  191080 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1211 23:56:40.507060  191080 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1211 23:56:40.506993  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:40.507197  191080 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-081397"
	I1211 23:56:40.507613  191080 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	W1211 23:56:40.508273  191080 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1211 23:56:40.508767  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.508846  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1211 23:56:40.508884  191080 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1211 23:56:40.508983  191080 addons.go:239] Setting addon default-storageclass=true in "addons-081397"
	I1211 23:56:40.509037  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.509123  191080 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1211 23:56:40.509134  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1211 23:56:40.509862  191080 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1211 23:56:40.509879  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1211 23:56:40.510705  191080 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1211 23:56:40.510765  191080 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:56:40.510708  191080 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1211 23:56:40.510709  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1211 23:56:40.510780  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1211 23:56:40.510963  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1211 23:56:40.512352  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1211 23:56:40.512423  191080 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:56:40.512795  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1211 23:56:40.513366  191080 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1211 23:56:40.513405  191080 out.go:179]   - Using image docker.io/registry:3.0.0
	I1211 23:56:40.513427  191080 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:56:40.513856  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1211 23:56:40.513452  191080 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:56:40.513569  191080 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1211 23:56:40.514419  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1211 23:56:40.514823  191080 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1211 23:56:40.515501  191080 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:56:40.515566  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1211 23:56:40.516012  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:40.516028  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1211 23:56:40.516032  191080 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:56:40.516099  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:56:40.516097  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1211 23:56:40.516114  191080 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1211 23:56:40.517202  191080 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:56:40.517226  191080 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:56:40.517560  191080 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1211 23:56:40.517676  191080 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1211 23:56:40.517948  191080 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:56:40.517967  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1211 23:56:40.519009  191080 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1211 23:56:40.519029  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1211 23:56:40.519106  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1211 23:56:40.520326  191080 out.go:179]   - Using image docker.io/busybox:stable
	I1211 23:56:40.521667  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1211 23:56:40.521748  191080 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:56:40.521773  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1211 23:56:40.523191  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.524446  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1211 23:56:40.524538  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.525508  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.525522  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.525556  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.526184  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.526857  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.526995  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.526987  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.527300  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1211 23:56:40.526876  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.528176  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.528215  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.528450  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.528655  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.528687  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.528793  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.529400  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.530020  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1211 23:56:40.530078  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.530252  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.530288  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.531125  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.531509  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.531550  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.531581  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.531691  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.532336  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.532490  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.532676  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.532971  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.533016  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.533392  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1211 23:56:40.533786  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.533419  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.533922  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.534209  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.534245  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.534763  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.534785  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.534834  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.534900  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.535083  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.535167  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.535342  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.535606  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1211 23:56:40.535631  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1211 23:56:40.535965  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.536268  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.536305  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536400  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.536418  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536548  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.536583  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.536615  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536653  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536963  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.536994  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.537838  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.537879  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.538098  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.540825  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.541431  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.541502  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.541709  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	W1211 23:56:41.043758  191080 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53944->192.168.39.2:22: read: connection reset by peer
	I1211 23:56:41.043809  191080 retry.go:31] will retry after 311.842554ms: ssh: handshake failed: read tcp 192.168.39.1:53944->192.168.39.2:22: read: connection reset by peer
	W1211 23:56:41.043894  191080 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53950->192.168.39.2:22: read: connection reset by peer
	I1211 23:56:41.043909  191080 retry.go:31] will retry after 329.825082ms: ssh: handshake failed: read tcp 192.168.39.1:53950->192.168.39.2:22: read: connection reset by peer
	I1211 23:56:41.808354  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1211 23:56:41.808403  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1211 23:56:41.861654  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1211 23:56:41.861692  191080 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1211 23:56:41.896943  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:56:41.918961  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:56:41.924444  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:56:41.946144  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:56:42.009856  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1211 23:56:42.009896  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1211 23:56:42.018699  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1211 23:56:42.069883  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:56:42.072418  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1211 23:56:42.145123  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:56:42.186767  191080 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1211 23:56:42.186812  191080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1211 23:56:42.259103  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:56:42.428120  191080 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.93183404s)
	I1211 23:56:42.428248  191080 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.925817571s)
	I1211 23:56:42.428352  191080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:56:42.428498  191080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1211 23:56:42.452426  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1211 23:56:42.452489  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1211 23:56:42.484208  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1211 23:56:42.484275  191080 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1211 23:56:42.588545  191080 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1211 23:56:42.588585  191080 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1211 23:56:42.633670  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1211 23:56:42.633723  191080 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1211 23:56:42.637947  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:56:42.706175  191080 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1211 23:56:42.706217  191080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1211 23:56:42.968807  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1211 23:56:42.968847  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1211 23:56:43.007497  191080 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:56:43.007532  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1211 23:56:43.028368  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1211 23:56:43.028403  191080 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1211 23:56:43.092788  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:56:43.092826  191080 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1211 23:56:43.128649  191080 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1211 23:56:43.128687  191080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1211 23:56:43.289535  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1211 23:56:43.289580  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1211 23:56:43.346982  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:56:43.347023  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1211 23:56:43.401818  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:56:43.523249  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:56:43.586597  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1211 23:56:43.586642  191080 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1211 23:56:43.774067  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1211 23:56:43.774118  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1211 23:56:43.801000  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:56:44.025438  191080 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:44.025490  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1211 23:56:44.174620  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.277572584s)
	I1211 23:56:44.174769  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.250262195s)
	I1211 23:56:44.193708  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1211 23:56:44.193737  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1211 23:56:44.555609  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:44.920026  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1211 23:56:44.920060  191080 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1211 23:56:45.697268  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1211 23:56:45.697305  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1211 23:56:46.254763  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1211 23:56:46.254799  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1211 23:56:46.581598  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:56:46.581642  191080 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1211 23:56:46.687719  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:56:47.971016  191080 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1211 23:56:47.975173  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:47.976154  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:47.976199  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:47.976614  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:48.491380  191080 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1211 23:56:48.692419  191080 addons.go:239] Setting addon gcp-auth=true in "addons-081397"
	I1211 23:56:48.692544  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:48.695342  191080 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1211 23:56:48.698779  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:48.699427  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:48.699601  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:48.699980  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:48.892556  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.973548228s)
	I1211 23:56:49.408333  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.462135831s)
	I1211 23:56:49.408425  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.389664864s)
	I1211 23:56:51.938139  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.865666864s)
	I1211 23:56:51.938187  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.793007267s)
	I1211 23:56:51.938385  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.679223761s)
	I1211 23:56:51.938486  191080 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.509912418s)
	I1211 23:56:51.938505  191080 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.510132207s)
	I1211 23:56:51.938523  191080 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1211 23:56:51.938693  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.300704152s)
	I1211 23:56:51.938740  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.868817664s)
	I1211 23:56:51.938763  191080 addons.go:495] Verifying addon ingress=true in "addons-081397"
	I1211 23:56:51.938775  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.536910017s)
	I1211 23:56:51.938799  191080 addons.go:495] Verifying addon registry=true in "addons-081397"
	I1211 23:56:51.939144  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.415830154s)
	I1211 23:56:51.939191  191080 addons.go:495] Verifying addon metrics-server=true in "addons-081397"
	I1211 23:56:51.939242  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.138197843s)
	I1211 23:56:51.939362  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.383652629s)
	W1211 23:56:51.939405  191080 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:56:51.939434  191080 retry.go:31] will retry after 326.794424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:56:51.939960  191080 node_ready.go:35] waiting up to 6m0s for node "addons-081397" to be "Ready" ...
	I1211 23:56:51.941538  191080 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-081397 service yakd-dashboard -n yakd-dashboard
	
	I1211 23:56:51.941540  191080 out.go:179] * Verifying registry addon...
	I1211 23:56:51.941553  191080 out.go:179] * Verifying ingress addon...
	I1211 23:56:51.943990  191080 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1211 23:56:51.944213  191080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1211 23:56:51.964791  191080 node_ready.go:49] node "addons-081397" is "Ready"
	I1211 23:56:51.964839  191080 node_ready.go:38] duration metric: took 24.813054ms for node "addons-081397" to be "Ready" ...
	I1211 23:56:51.964861  191080 api_server.go:52] waiting for apiserver process to appear ...
	I1211 23:56:51.964931  191080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 23:56:52.001706  191080 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:56:52.001747  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:52.002821  191080 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1211 23:56:52.002849  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:52.266441  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:52.467902  191080 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-081397" context rescaled to 1 replicas
	I1211 23:56:52.469927  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:52.473967  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:52.974199  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:53.067246  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.503323  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.503384  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:53.644012  191080 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.948623338s)
	I1211 23:56:53.644102  191080 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.679150419s)
	I1211 23:56:53.644155  191080 api_server.go:72] duration metric: took 13.147840239s to wait for apiserver process to appear ...
	I1211 23:56:53.644280  191080 api_server.go:88] waiting for apiserver healthz status ...
	I1211 23:56:53.644328  191080 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I1211 23:56:53.644007  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.956173954s)
	I1211 23:56:53.644412  191080 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-081397"
	I1211 23:56:53.646266  191080 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1211 23:56:53.647231  191080 out.go:179] * Verifying csi-hostpath-driver addon...
	I1211 23:56:53.648911  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:53.650424  191080 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1211 23:56:53.650455  191080 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1211 23:56:53.650539  191080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1211 23:56:53.695860  191080 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I1211 23:56:53.698147  191080 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:56:53.698187  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:53.714330  191080 api_server.go:141] control plane version: v1.34.2
	I1211 23:56:53.714403  191080 api_server.go:131] duration metric: took 70.105256ms to wait for apiserver health ...
	I1211 23:56:53.714423  191080 system_pods.go:43] waiting for kube-system pods to appear ...
	I1211 23:56:53.722159  191080 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1211 23:56:53.722205  191080 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1211 23:56:53.741176  191080 system_pods.go:59] 20 kube-system pods found
	I1211 23:56:53.741243  191080 system_pods.go:61] "amd-gpu-device-plugin-djxv6" [4f5aeb19-64d9-4433-b64e-e6cfb3654839] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:56:53.741269  191080 system_pods.go:61] "coredns-66bc5c9577-dmswf" [30230e03-4081-4208-bdd5-a93b39aaaa41] Running
	I1211 23:56:53.741279  191080 system_pods.go:61] "coredns-66bc5c9577-prc7f" [f5b3faeb-71ca-42c9-b591-4b563dca360b] Running
	I1211 23:56:53.741289  191080 system_pods.go:61] "csi-hostpath-attacher-0" [fd013040-9f15-4172-87f5-15b174a58d87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:56:53.741297  191080 system_pods.go:61] "csi-hostpath-resizer-0" [75ee82ce-3700-4961-8ce6-bd9b588cc478] Pending
	I1211 23:56:53.741307  191080 system_pods.go:61] "csi-hostpathplugin-69v6v" [d2bf83fd-6890-4456-896a-d83906c2ad1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:56:53.741316  191080 system_pods.go:61] "etcd-addons-081397" [76acbe8b-6c34-47ed-9c17-d10d2b90f854] Running
	I1211 23:56:53.741323  191080 system_pods.go:61] "kube-apiserver-addons-081397" [aa5c2483-4778-415c-983d-77b4683c028a] Running
	I1211 23:56:53.741330  191080 system_pods.go:61] "kube-controller-manager-addons-081397" [f66f3f89-2978-45f0-85e3-9b2485e2c357] Running
	I1211 23:56:53.741340  191080 system_pods.go:61] "kube-ingress-dns-minikube" [7b7df0e3-b14f-46c9-8338-f54a7557bdd0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:56:53.741347  191080 system_pods.go:61] "kube-proxy-jwqpk" [dd248790-eb90-4f63-bb25-4253ea30ba17] Running
	I1211 23:56:53.741358  191080 system_pods.go:61] "kube-scheduler-addons-081397" [d576bcfe-e1bc-4f95-be05-44d726aad7bf] Running
	I1211 23:56:53.741367  191080 system_pods.go:61] "metrics-server-85b7d694d7-zfsb8" [fd42d792-5bd0-449d-92f8-f0c0c74c4975] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:56:53.741382  191080 system_pods.go:61] "nvidia-device-plugin-daemonset-rbpjs" [22649f4f-f712-4939-86ae-d4e2f87acc0a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:56:53.741390  191080 system_pods.go:61] "registry-6b586f9694-f9q5b" [96c372a4-ae7e-4df5-9a48-525fc42f8bc5] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:56:53.741401  191080 system_pods.go:61] "registry-creds-764b6fb674-fn77c" [4d72d75e-437b-4632-9fb1-3a7067c23d39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:56:53.741414  191080 system_pods.go:61] "registry-proxy-fdnc8" [3d8a40d6-255a-4a70-aee7-d5a6ce60f129] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:56:53.741427  191080 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6pxqk" [9c319b4a-5f0f-4d81-9f15-6e457050470a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.741445  191080 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7pg65" [460595b1-c11f-4b8a-9d7c-5805587a937c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.741455  191080 system_pods.go:61] "storage-provisioner" [0c582cdc-c50b-4759-b05c-e3b1cd92e04f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:56:53.741497  191080 system_pods.go:74] duration metric: took 27.063753ms to wait for pod list to return data ...
	I1211 23:56:53.741514  191080 default_sa.go:34] waiting for default service account to be created ...
	I1211 23:56:53.789135  191080 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:56:53.789157  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1211 23:56:53.793775  191080 default_sa.go:45] found service account: "default"
	I1211 23:56:53.793806  191080 default_sa.go:55] duration metric: took 52.279991ms for default service account to be created ...
	I1211 23:56:53.793821  191080 system_pods.go:116] waiting for k8s-apps to be running ...
	I1211 23:56:53.844257  191080 system_pods.go:86] 20 kube-system pods found
	I1211 23:56:53.844307  191080 system_pods.go:89] "amd-gpu-device-plugin-djxv6" [4f5aeb19-64d9-4433-b64e-e6cfb3654839] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:56:53.844317  191080 system_pods.go:89] "coredns-66bc5c9577-dmswf" [30230e03-4081-4208-bdd5-a93b39aaaa41] Running
	I1211 23:56:53.844326  191080 system_pods.go:89] "coredns-66bc5c9577-prc7f" [f5b3faeb-71ca-42c9-b591-4b563dca360b] Running
	I1211 23:56:53.844334  191080 system_pods.go:89] "csi-hostpath-attacher-0" [fd013040-9f15-4172-87f5-15b174a58d87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:56:53.844340  191080 system_pods.go:89] "csi-hostpath-resizer-0" [75ee82ce-3700-4961-8ce6-bd9b588cc478] Pending
	I1211 23:56:53.844352  191080 system_pods.go:89] "csi-hostpathplugin-69v6v" [d2bf83fd-6890-4456-896a-d83906c2ad1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:56:53.844358  191080 system_pods.go:89] "etcd-addons-081397" [76acbe8b-6c34-47ed-9c17-d10d2b90f854] Running
	I1211 23:56:53.844364  191080 system_pods.go:89] "kube-apiserver-addons-081397" [aa5c2483-4778-415c-983d-77b4683c028a] Running
	I1211 23:56:53.844369  191080 system_pods.go:89] "kube-controller-manager-addons-081397" [f66f3f89-2978-45f0-85e3-9b2485e2c357] Running
	I1211 23:56:53.844377  191080 system_pods.go:89] "kube-ingress-dns-minikube" [7b7df0e3-b14f-46c9-8338-f54a7557bdd0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:56:53.844387  191080 system_pods.go:89] "kube-proxy-jwqpk" [dd248790-eb90-4f63-bb25-4253ea30ba17] Running
	I1211 23:56:53.844394  191080 system_pods.go:89] "kube-scheduler-addons-081397" [d576bcfe-e1bc-4f95-be05-44d726aad7bf] Running
	I1211 23:56:53.844407  191080 system_pods.go:89] "metrics-server-85b7d694d7-zfsb8" [fd42d792-5bd0-449d-92f8-f0c0c74c4975] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:56:53.844416  191080 system_pods.go:89] "nvidia-device-plugin-daemonset-rbpjs" [22649f4f-f712-4939-86ae-d4e2f87acc0a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:56:53.844429  191080 system_pods.go:89] "registry-6b586f9694-f9q5b" [96c372a4-ae7e-4df5-9a48-525fc42f8bc5] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:56:53.844439  191080 system_pods.go:89] "registry-creds-764b6fb674-fn77c" [4d72d75e-437b-4632-9fb1-3a7067c23d39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:56:53.844475  191080 system_pods.go:89] "registry-proxy-fdnc8" [3d8a40d6-255a-4a70-aee7-d5a6ce60f129] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:56:53.844488  191080 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6pxqk" [9c319b4a-5f0f-4d81-9f15-6e457050470a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.844498  191080 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7pg65" [460595b1-c11f-4b8a-9d7c-5805587a937c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.844507  191080 system_pods.go:89] "storage-provisioner" [0c582cdc-c50b-4759-b05c-e3b1cd92e04f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:56:53.844519  191080 system_pods.go:126] duration metric: took 50.689154ms to wait for k8s-apps to be running ...
	I1211 23:56:53.844532  191080 system_svc.go:44] waiting for kubelet service to be running ....
	I1211 23:56:53.844608  191080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 23:56:53.902002  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:56:53.955676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.955845  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.160809  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:54.448357  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:54.453907  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.660400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:54.960099  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:54.962037  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.993140  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.726594297s)
	I1211 23:56:54.993153  191080 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.148518984s)
	I1211 23:56:54.993221  191080 system_svc.go:56] duration metric: took 1.148683395s WaitForService to wait for kubelet
	I1211 23:56:54.993231  191080 kubeadm.go:587] duration metric: took 14.496919105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:56:54.993249  191080 node_conditions.go:102] verifying NodePressure condition ...
	I1211 23:56:55.001998  191080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1211 23:56:55.002046  191080 node_conditions.go:123] node cpu capacity is 2
	I1211 23:56:55.002095  191080 node_conditions.go:105] duration metric: took 8.839368ms to run NodePressure ...
	I1211 23:56:55.002114  191080 start.go:242] waiting for startup goroutines ...
	I1211 23:56:55.161169  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:55.517092  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:55.539796  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:55.579689  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.677622577s)
	I1211 23:56:55.581053  191080 addons.go:495] Verifying addon gcp-auth=true in "addons-081397"
	I1211 23:56:55.583166  191080 out.go:179] * Verifying gcp-auth addon...
	I1211 23:56:55.585775  191080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1211 23:56:55.610126  191080 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1211 23:56:55.610157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:55.684117  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:55.957671  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:55.958053  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.094446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:56.159426  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:56.454250  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:56.454305  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.593123  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:56.698651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:56.955164  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.955254  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... 258 similar kapi.go:96 polling lines (23:56:57 through 23:57:28) elided: the gcp-auth, csi-hostpath-driver, ingress-nginx, and registry pods were each polled roughly twice per second and every check returned current state: Pending: [<nil>] ...]
	I1211 23:57:29.448981  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:29.449967  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:29.589372  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:29.656746  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:29.951190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:29.951266  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.089966  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:30.156024  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:30.449807  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:30.449940  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.592795  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:30.655965  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:30.949686  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.949854  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:31.089144  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:31.155728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:31.448249  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:31.451576  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:31.590176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:31.656389  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:31.949905  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:31.950451  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:32.090191  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:32.156400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:32.449602  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:32.449836  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:32.591164  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:32.657213  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:32.948520  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:32.948804  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:33.089649  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:33.156050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:33.450227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:33.450227  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:33.590456  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:33.656274  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:33.949256  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:33.949347  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:34.091203  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:34.156547  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:34.450354  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:34.450411  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:34.591349  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:34.656156  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:34.948431  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:34.948893  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:35.089378  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:35.156784  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:35.450919  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:35.451766  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:35.589587  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:35.656818  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:35.949417  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:35.950715  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:36.090779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:36.155710  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:36.452002  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:36.452240  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:36.590343  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:36.655697  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:36.949354  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:36.949385  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:37.091333  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:37.155660  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:37.448936  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:37.449075  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:37.590116  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:37.656050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:37.949528  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:37.950239  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:38.090375  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:38.156630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:38.449400  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:38.449825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:38.590511  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:38.655832  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:38.948985  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:38.949093  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:39.090158  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:39.155820  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:39.449629  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:39.451242  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:39.590400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:39.656829  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:39.948865  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:39.949106  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:40.089281  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:40.156612  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:40.450580  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:40.450998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:40.590980  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:40.655008  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:40.949712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:40.949853  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:41.089939  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:41.155401  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:41.448080  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:41.451541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:41.590421  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:41.656608  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:41.950025  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:41.950358  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:42.090340  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:42.159954  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:42.450058  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:42.450329  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:42.589818  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:42.655716  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:42.948985  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:42.952252  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:43.090380  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:43.155314  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:43.450015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:43.450202  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:43.590190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:43.655086  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:43.948401  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:43.949453  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:44.090744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:44.154784  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:44.449614  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:44.449642  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:44.590645  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:44.656686  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:44.950021  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:44.951009  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:45.090020  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:45.155822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:45.449438  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:45.449646  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:45.590975  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:45.656192  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:45.949128  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:45.949580  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:46.091176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:46.155290  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:46.448997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:46.450442  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:46.590802  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:46.654435  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:46.949893  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:46.950255  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:47.091631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:47.156353  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:47.450093  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:47.455744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:47.622817  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:47.657485  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:47.951291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:47.953670  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:48.093758  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:48.155393  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:48.452298  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:48.452366  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:48.592111  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:48.657572  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:48.951626  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:48.952512  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:49.091082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:49.157173  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:49.452908  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:49.453973  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:49.591765  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:49.699112  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:49.951994  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:49.953086  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:50.090983  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:50.162358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:50.452611  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:50.453823  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:50.593450  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:50.664907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:50.961300  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:50.961709  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:51.105008  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:51.168542  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:51.460773  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:51.463367  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:51.596820  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:51.659982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:51.954007  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:51.956978  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:52.090564  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:52.156735  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:52.459306  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:52.461605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:52.591646  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:52.659476  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:52.949249  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:52.949360  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:53.091342  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:53.158735  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:53.451408  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:53.454585  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:53.590776  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:53.656237  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:53.954524  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:53.954679  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:54.095794  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:54.159448  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:54.576047  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:54.576308  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:54.590001  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:54.659406  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:54.950589  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:54.950691  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:55.092084  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:55.157456  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:55.451531  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:55.451907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:55.590653  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:55.655648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:55.949374  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:55.953638  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:56.090027  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:56.156602  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:56.448573  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:56.448625  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:56.593728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:56.658937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:56.952879  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:56.952929  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:57.091934  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:57.159057  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:57.451436  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:57.455516  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:57.591262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:57.659040  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:57.954096  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:57.955115  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:58.092045  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:58.156829  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:58.449510  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:58.452029  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:58.591835  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:58.655523  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:58.950729  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:58.951027  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:59.091806  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:59.192766  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:59.450923  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:59.450927  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:59.589799  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:59.654677  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:59.950001  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:59.950014  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:00.090853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:00.157042  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:00.448336  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:00.448337  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:00.592094  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:00.658087  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:00.957344  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:00.957336  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:01.092515  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:01.156002  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:01.448332  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:01.450557  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:01.590308  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:01.655760  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:01.948943  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:01.948994  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:02.090034  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:02.155101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:02.448750  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:02.451925  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:02.591378  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:02.692860  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:02.948711  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:02.949373  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:03.090905  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:03.155274  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:03.564036  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:03.566077  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:03.589166  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:03.656333  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:03.950104  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:03.951138  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:04.090344  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:04.155950  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:04.449528  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:04.449593  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:04.590190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:04.655882  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:04.949372  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:04.949508  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:05.090348  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:05.156443  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:05.449652  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:05.449659  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:05.590664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:05.657339  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:05.948372  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:05.949962  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:06.090065  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:06.157993  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:06.447621  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:06.447687  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:06.589658  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:06.656748  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:06.950654  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:06.952348  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:07.090424  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:07.154888  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:07.449307  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:07.449391  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:07.591221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:07.655886  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:07.949784  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:07.950390  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:08.090645  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:08.154567  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:08.450533  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:08.451325  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:08.590268  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:08.657358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:08.950295  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:08.950733  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:09.091051  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:09.155807  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:09.449202  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:09.449232  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:09.590096  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:09.654983  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:09.950294  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:09.950637  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:10.092096  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:10.155487  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:10.449477  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:10.450235  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:10.592429  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:10.655383  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:10.950193  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:10.951385  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:11.090841  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:11.154640  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:11.448065  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:11.448340  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:11.590017  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:11.656300  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:11.950170  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:11.950312  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:12.090862  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:12.156842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:12.450055  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:12.451008  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:12.590233  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:12.656044  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:12.950138  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:12.950258  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:13.090444  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:13.155597  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:13.449740  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:13.449778  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:13.591284  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:13.655552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:13.948617  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:13.949836  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:14.090622  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:14.156895  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:14.448993  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:14.450176  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:14.589623  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:14.656671  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:14.950841  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:14.951121  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:15.090529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:15.155811  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:15.449246  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:15.449410  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:15.591082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:15.656904  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:15.949103  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:15.949272  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:16.090640  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:16.155039  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:16.447514  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:16.449003  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:16.589821  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:16.655674  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:16.952654  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:16.953063  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:17.091612  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:17.159499  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:17.449631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:17.449881  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:17.590494  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:17.655629  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:17.951351  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:17.951511  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:18.090316  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:18.155509  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:18.450535  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:18.451342  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:18.591041  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:18.655519  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:18.949171  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:18.949503  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:19.089765  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:19.155836  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:19.449076  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:19.452236  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:19.590791  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:19.655570  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:19.949527  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:19.949612  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:20.090142  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:20.154962  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:20.448016  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:20.450402  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:20.589309  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:20.655296  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:20.949277  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:20.951681  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:21.089881  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:21.154879  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:21.448360  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:21.448858  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:21.589856  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:21.655417  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:21.949400  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:21.949574  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:22.090271  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:22.155368  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:22.449742  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:22.450560  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:22.591054  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:22.656707  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:22.950712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:22.950890  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:23.091160  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:23.155904  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:23.451079  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:23.451281  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:23.590720  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:23.654815  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:23.950160  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:23.950337  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:24.090330  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:24.156001  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:24.447566  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:24.450052  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:24.591509  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:24.656932  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:24.948400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:24.949405  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:25.090541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:25.155347  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:25.449568  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:25.450447  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:25.591119  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:25.654957  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:25.950271  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:25.951174  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:26.091002  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:26.155568  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:26.449372  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:26.449561  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:26.590898  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:26.656087  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:26.951452  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:26.953541  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:27.091542  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:27.155995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:27.451595  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:27.452488  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:27.591591  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:27.657762  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:27.949590  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:27.952182  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:28.090479  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:28.155291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:28.450004  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:28.450851  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:28.590103  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:28.655339  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:28.953363  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:28.954717  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:29.093694  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:29.155028  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:29.449055  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:29.450347  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:29.590581  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:29.656654  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:29.950515  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:29.950799  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:30.090326  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:30.155485  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:30.448572  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:30.449692  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:30.590878  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:30.655807  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:30.956951  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:30.957577  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:31.092534  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:31.155903  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:31.449802  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:31.450326  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:31.593269  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:31.656218  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:31.949211  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:31.949934  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:32.091982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:32.155603  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:32.449522  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:32.451425  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:32.590687  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:32.655082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:32.950545  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:32.950713  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:33.091712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:33.156900  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:33.450998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:33.451121  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:33.592756  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:33.655387  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:33.956059  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:33.956346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:34.090541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:34.155676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:34.449252  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:34.449255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:34.589931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:34.655778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:34.950791  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:34.951042  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:35.089716  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:35.155182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:35.447641  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:35.449881  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:35.590101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:35.655365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:35.949158  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:35.951312  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:36.090687  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:36.156509  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:36.448272  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:36.448489  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:36.591352  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:36.657569  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:36.950696  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:36.952142  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:37.090121  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:37.155891  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:37.448859  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:37.449811  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:37.589598  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:37.655164  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:37.950606  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:37.950726  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:38.089931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:38.155402  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:38.449956  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:38.450889  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:38.590982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:38.655741  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:38.950070  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:38.950118  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:39.090737  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:39.156071  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:39.448413  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:39.448760  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:39.590316  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:39.655228  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:39.948192  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:39.948232  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:40.089574  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:40.156012  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:40.448864  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:40.451601  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:40.592083  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:40.656209  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:40.948842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:40.949127  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:41.091091  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:41.155236  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:41.449778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:41.450851  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:41.589659  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:41.656116  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:41.949174  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:41.949802  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:42.090816  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:42.155802  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:42.450496  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:42.452958  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:42.591015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:42.655595  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:42.949982  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:42.951301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:43.091554  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:43.155772  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:43.451215  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:43.451399  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:43.590489  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:43.655665  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:43.949328  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:43.950974  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:44.092276  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:44.155455  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:44.449429  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:44.449512  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:44.591046  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:44.655586  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:44.949500  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:44.951599  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:45.094722  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:45.154774  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:45.449770  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:45.451691  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:45.590761  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:45.655352  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:45.949743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:45.949864  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:46.090631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:46.156103  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:46.449181  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:46.449779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:46.591976  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:46.655596  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:46.949173  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:46.950623  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:47.093977  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:47.156056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:47.450281  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:47.450897  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:47.591849  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:47.655891  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:47.950318  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:47.951578  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:48.091959  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:48.154872  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:48.450075  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:48.451948  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:48.589733  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:48.655026  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:48.947902  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:48.948922  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:49.090363  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:49.155236  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:49.449018  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:49.449294  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:49.589648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:49.654518  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:49.949085  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:49.949327  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:50.089715  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:50.155336  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:50.450276  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:50.450610  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:50.590265  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:50.655617  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:50.949893  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:50.951287  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:51.090631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:51.155403  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:51.449820  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:51.451010  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:51.591075  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:51.654839  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:51.949284  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:51.950009  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:52.090582  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:52.157494  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:52.448608  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:52.450368  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:52.590998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:52.655180  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:52.948718  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:52.950284  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:53.090712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:53.158605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:53.451168  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:53.451536  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:53.589760  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:53.657022  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:53.948734  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:53.951371  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:54.090202  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:54.155484  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:54.448582  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:54.450090  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:54.589620  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:54.656268  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:54.950155  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:54.950342  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:55.092526  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:55.155567  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:55.448897  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:55.450647  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:55.590184  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:55.656034  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:55.948843  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:55.949804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:56.092633  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:56.155535  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:56.449050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:56.450032  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:56.589978  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:56.655578  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:56.951227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:56.951391  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:57.089968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:57.156011  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:57.449111  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:57.449543  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:57.591323  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:57.656295  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:57.949838  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:57.950157  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:58.090263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:58.155586  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:58.450591  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:58.450796  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:58.590735  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:58.655042  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:58.948769  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:58.949101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:59.089480  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:59.156356  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:59.450318  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:59.452097  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:59.589757  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:59.656038  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:59.951264  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:59.955025  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:00.093307  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:00.169810  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:00.453668  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:00.453747  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:00.591664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:00.662082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:00.958327  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:00.958678  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:01.093618  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:01.191821  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:01.455185  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:01.458398  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:01.593233  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:01.657309  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:01.950520  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:01.956319  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:02.092841  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:02.158368  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:02.454368  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:02.454386  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:02.592341  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:02.658118  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:02.969970  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:02.970262  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:03.091543  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:03.193034  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:03.478206  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:03.494398  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:03.601217  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:03.659210  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:03.956276  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:03.961174  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:04.090843  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:04.154383  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:04.451688  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:04.451709  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:04.590930  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:04.656263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:04.949363  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:04.950133  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:05.102487  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:05.156487  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:05.456245  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:05.457922  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:05.596196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:05.660935  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:05.949095  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:05.954162  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:06.098801  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:06.161484  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:06.448923  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:06.452592  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:06.590210  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:06.659607  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:06.954480  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:06.955630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:07.094202  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:07.161252  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:07.451546  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:07.451627  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:07.599662  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:07.656720  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:07.951554  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:07.951751  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:08.096946  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:08.157724  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:08.453200  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:08.453207  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:08.592711  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:08.695126  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:08.958140  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:08.958561  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:09.090111  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:09.155633  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:09.450116  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:09.450157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:09.595338  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:09.656262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:09.950903  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:09.951773  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:10.089779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:10.155949  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:10.448520  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:10.449409  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:10.599275  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:10.659673  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:10.948979  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:10.950560  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:11.090875  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:11.155105  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:11.449575  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:11.450246  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:11.600631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:11.658293  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:11.950730  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:11.950966  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:12.090374  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:12.157299  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:12.449320  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:12.449345  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:12.593214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:12.664092  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:12.950600  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:12.951052  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:13.090059  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:13.154911  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:13.450841  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:13.450957  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:13.592263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:13.655883  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:13.948080  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:13.948214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:14.089646  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:14.157040  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:14.448769  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:14.449141  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:14.590729  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:14.654626  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:14.949399  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:14.951103  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:15.092446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:15.156294  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:15.452499  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:15.452500  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:15.590621  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:15.657627  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:15.951795  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:15.952077  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:16.089680  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:16.156176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:16.448324  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:16.448431  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:16.590666  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:16.656743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:16.948906  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:16.949692  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:17.091257  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:17.155187  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:17.450607  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:17.450848  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:17.589365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:17.655407  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:17.948856  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:17.949516  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:18.091507  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:18.155888  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:18.451560  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:18.452505  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:18.590970  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:18.655165  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:18.947845  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:18.949425  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:19.090758  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:19.154844  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:19.450275  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:19.451846  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:19.589989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:19.655133  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:19.950045  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:19.950331  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:20.090153  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:20.155708  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:20.448562  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:20.448562  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:20.591627  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:20.655924  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:20.947974  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:20.948912  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:21.089422  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:21.155655  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:21.449438  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:21.449734  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:21.589919  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:21.657291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:21.949251  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:21.952143  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:22.091386  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:22.157354  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:22.448913  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:22.449226  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:22.590529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:22.657745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:22.948540  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:22.948933  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:23.089214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:23.157137  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:23.450670  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:23.450902  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:23.590522  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:23.656379  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:23.950154  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:23.950625  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:24.091355  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:24.157165  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:24.448825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:24.453054  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:24.590234  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:24.657541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:24.949335  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:24.951024  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:25.092211  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:25.154931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:25.448939  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:25.448993  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:25.589597  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:25.656199  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:25.948738  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:25.949046  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:26.091849  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:26.154651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:26.448387  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:26.448440  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:26.590516  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:26.656196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:26.949998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:26.950307  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:27.090220  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:27.156297  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:27.450874  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:27.451092  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:27.589664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:27.655496  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:27.949431  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:27.951612  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:28.090350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:28.155161  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:28.448979  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:28.449151  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:28.589861  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:28.655413  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:28.949789  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:28.951855  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:29.090331  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:29.157070  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:29.449482  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:29.450006  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:29.590813  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:29.655573  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:29.949907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:29.950025  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:30.091458  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:30.158405  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:30.447779  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:30.448834  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:30.591091  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:30.655875  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:30.950684  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:30.953289  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:31.091332  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:31.156823  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:31.448823  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:31.450781  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:31.591809  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:31.656075  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:31.948759  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:31.948968  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:32.091729  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:32.154747  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:32.449239  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:32.449837  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:32.590571  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:32.655646  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:32.949282  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:32.949595  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:33.090694  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:33.155167  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:33.451071  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:33.451405  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:33.591171  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:33.656119  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:33.949262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:33.949454  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:34.090283  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:34.155781  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:34.450392  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:34.451683  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:34.591571  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:34.655909  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:34.949219  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:34.949408  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:35.089980  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:35.154740  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:35.450095  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:35.450349  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:35.591227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:35.692481  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:35.949141  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:35.951867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:36.090822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:36.156098  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:36.448722  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:36.449538  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:36.589624  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:36.657137  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:36.948984  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:36.949366  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:37.091350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:37.157094  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:37.448182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:37.450253  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:37.591119  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:37.656425  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:37.948975  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:37.949867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:38.089759  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:38.155828  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:38.451552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:38.451647  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:38.589973  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:38.655877  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:38.951367  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:38.951367  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:39.091390  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:39.405012  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:39.452050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:39.452196  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:39.595044  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:39.665344  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:39.953209  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:39.953555  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:40.092147  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:40.155320  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:40.451110  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:40.451951  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:40.591316  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:40.655931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:40.950017  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:40.951388  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:41.090401  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:41.155143  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:41.448442  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:41.449115  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:41.591565  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:41.656306  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:41.949112  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:41.949534  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:42.091549  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:42.155830  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:42.449887  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:42.450125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:42.591409  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:42.658038  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:42.948502  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:42.951166  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:43.090200  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:43.156509  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:43.450320  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:43.450913  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:43.592334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:43.656125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:43.948166  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:43.949168  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:44.089675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:44.155311  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:44.447960  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:44.449667  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:44.592196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:44.655822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:44.952752  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:44.952747  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:45.090049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:45.155289  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:45.448550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:45.449206  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:45.593908  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:45.656032  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:45.949589  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:45.949968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:46.089906  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:46.156255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:46.448239  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:46.448309  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:46.590954  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:46.656897  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:46.950439  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:46.952416  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:47.090308  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:47.156374  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:47.449653  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:47.450249  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:47.589853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:47.655793  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:47.948702  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:47.948870  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:48.089879  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:48.155753  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:48.448357  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:48.450383  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:48.590577  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:48.656031  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:48.948556  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:48.950049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:49.089412  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:49.156350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:49.449163  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:49.449205  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:49.590039  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:49.655560  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:49.949665  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:49.950181  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:50.090049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:50.155293  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:50.448667  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:50.449257  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:50.590165  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:50.655541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:50.950136  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:50.951139  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:51.092122  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:51.155044  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:51.448983  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:51.449212  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:51.595578  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:51.696454  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:51.949343  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:51.949398  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:52.090651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:52.156291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:52.449203  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:52.449249  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:52.590754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:52.654991  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:52.948372  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:52.948385  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:53.091609  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:53.156662  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:53.450157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:53.451318  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:53.590507  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:53.658421  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:53.949207  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:53.949267  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:54.090069  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:54.155373  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:54.448514  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:54.449541  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:54.591594  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:54.656653  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:54.949522  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:54.950322  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:55.092501  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:55.156404  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:55.449073  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:55.449090  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:55.590073  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:55.662793  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:55.950067  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:55.950494  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:56.089914  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:56.155999  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:56.449360  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:56.449507  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:56.590986  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:56.655362  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:56.949305  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:56.950892  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:57.090093  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:57.156315  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:57.449124  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:57.449348  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:57.589882  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:57.655552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:57.949449  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:57.949662  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:58.090373  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:58.155727  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:58.449522  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:58.450610  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:58.592143  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:58.656332  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:58.948528  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:58.949696  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:59.090864  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:59.155322  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:59.450350  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:59.450722  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:59.590799  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:59.655636  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:59.950576  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:59.950769  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:00.090194  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:00.156754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:00.449577  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:00.450369  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:00.591719  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:00.655897  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:00.950338  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:00.950455  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:01.090278  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:01.156266  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:01.452423  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:01.453174  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:01.591221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:01.657914  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:01.948554  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:01.948798  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:02.090601  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:02.157198  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:02.447995  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:02.448026  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:02.590202  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:02.657562  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:02.949780  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:02.952121  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:03.090613  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:03.155733  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:03.449937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:03.449933  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:03.590926  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:03.658062  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:03.949170  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:03.949810  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:04.091414  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:04.155665  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:04.448744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:04.448999  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:04.589836  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:04.656449  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:04.948744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:04.948893  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:05.091208  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:05.156906  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:05.449064  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:05.449106  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:05.590901  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:05.656845  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:05.950206  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:05.950384  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:06.090523  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:06.155990  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:06.449777  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:06.450837  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:06.590182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:06.656853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:06.948285  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:06.948607  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:07.089995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:07.155347  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:07.449239  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:07.449281  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:07.590385  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:07.656186  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:07.948484  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:07.949766  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:08.090129  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:08.156334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:08.449094  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:08.449099  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:08.590056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:08.655353  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:08.948870  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:08.949837  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:09.089440  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:09.155221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:09.448572  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:09.449128  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:09.590937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:09.655774  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:09.950643  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:09.950782  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:10.091963  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:10.157142  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:10.447872  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:10.448754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:10.590167  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:10.655410  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:10.948681  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:10.950881  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:11.090375  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:11.157845  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:11.448987  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:11.451786  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:11.589278  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:11.656898  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:11.948648  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:11.951845  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:12.089679  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:12.156033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:12.448020  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:12.448554  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:12.591529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:12.657808  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:12.949844  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:12.950340  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:13.090550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:13.156837  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:13.449856  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:13.449881  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:13.590325  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:13.656633  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:13.951242  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:13.951287  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:14.089997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:14.155198  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:14.448400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:14.448585  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:14.590551  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:14.656896  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:14.949893  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:14.951119  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:15.090441  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:15.155404  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:15.451852  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:15.452266  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:15.591327  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:15.656049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:15.952977  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:15.953024  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:16.093981  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:16.156724  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:16.448908  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:16.451378  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:16.592066  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:16.657270  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:16.948987  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:16.949080  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:17.090484  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:17.158533  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:17.449593  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:17.449614  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:17.591576  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:17.656835  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:17.952242  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:17.952334  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:18.091234  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:18.156793  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:18.450858  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:18.451103  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:18.590911  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:18.655661  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:18.950782  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:18.950840  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:19.091124  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:19.154772  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:19.449026  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:19.451771  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:19.590291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:19.657065  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:19.951301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:19.951653  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:20.089930  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:20.156561  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:20.448782  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:20.453763  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:20.591804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:20.655268  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:20.948366  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:20.948454  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:21.090508  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:21.158196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:21.449394  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:21.449441  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:21.590734  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:21.655940  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:21.950169  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:21.950328  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:22.089889  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:22.157558  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:22.449534  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:22.449815  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:22.590378  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:22.655963  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:22.947894  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:22.948182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:23.090569  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:23.156273  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:23.450639  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:23.450816  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:23.589218  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:23.655281  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:23.949543  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:23.949989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:24.090664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:24.155725  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:24.449198  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:24.451299  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:24.590352  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:24.656205  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:24.947767  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:24.948451  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:25.090431  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:25.156379  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:25.449358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:25.449672  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:25.589853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:25.654878  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:25.949904  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:25.950152  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:26.089797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:26.155724  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:26.449336  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:26.450596  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:26.592346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:26.657848  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:26.949333  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:26.950229  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:27.090752  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:27.157107  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:27.449820  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:27.450010  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:27.590514  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:27.657927  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:27.951547  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:27.952176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:28.090550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:28.156767  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:28.450227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:28.451522  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:28.591521  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:28.656790  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:28.949538  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:28.949826  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:29.090055  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:29.155834  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:29.450097  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:29.450167  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:29.590630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:29.655299  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:29.949633  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:29.950101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:30.089708  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:30.154762  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:30.449094  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:30.450366  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:30.590666  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:30.655870  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:30.948836  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:30.948972  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:31.089244  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:31.155334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:31.448853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:31.449043  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:31.590675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:31.655919  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:31.950253  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:31.951767  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:32.089750  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:32.155423  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:32.449657  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:32.449946  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:32.590358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:32.656797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:32.950101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:32.950269  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:33.090803  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:33.154674  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:33.454615  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:33.454885  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:33.589479  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:33.656942  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:33.953188  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:33.954139  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:34.091629  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:34.156823  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:34.448754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:34.449071  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:34.589301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:34.656551  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:34.948611  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:34.950196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:35.091634  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:35.160584  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:35.448684  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:35.449322  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:35.589630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:35.655232  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:35.947899  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:35.948842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:36.090521  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:36.155599  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:36.449031  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:36.449382  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:36.591743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:36.655255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:36.948722  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:36.949779  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:37.090918  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:37.157590  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:37.448713  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:37.449843  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:37.589677  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:37.656720  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:37.949867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:37.950644  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:38.093262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:38.156220  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:38.448943  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:38.450543  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:38.591971  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:38.655424  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:38.949892  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:38.951285  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:39.090837  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:39.155790  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:39.449689  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:39.450060  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:39.590012  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:39.655544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:39.949824  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:39.954336  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:40.095357  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:40.155946  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:40.451271  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:40.452848  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:40.590990  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:40.655214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:40.963350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:40.967975  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:41.092691  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:41.157255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:41.461052  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:41.464606  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:41.592100  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:41.658218  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:41.951346  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:41.953539  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:42.091948  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:42.170296  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:42.449833  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:42.449879  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:42.589631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:42.655925  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:42.952512  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:42.953941  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:43.090620  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:43.155937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:43.449805  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:43.451726  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:43.590975  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:43.655839  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:43.949267  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:43.950221  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:44.091825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:44.158335  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:44.448909  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:44.450502  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:44.590179  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:44.656226  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:44.948916  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:44.950140  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:45.089907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:45.156705  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:45.449149  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:45.449285  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:45.590294  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:45.655955  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:45.948817  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:45.951525  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:46.091170  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:46.155968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:46.448814  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:46.450026  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:46.590257  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:46.655476  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:46.950202  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:46.950358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:47.091544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:47.156635  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:47.448759  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:47.450582  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:47.591771  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:47.655438  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:47.951589  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:47.951950  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:48.091551  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:48.155719  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:48.449736  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:48.449931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:48.590742  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:48.656337  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:48.951175  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:48.951871  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:49.089625  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:49.154672  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:49.449387  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:49.451177  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:49.589995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:49.655055  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:49.947911  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:49.948323  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:50.090498  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:50.155724  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:50.448625  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:50.449769  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:50.589819  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:50.656445  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:50.952353  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:50.952565  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:51.091804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:51.155910  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:51.449736  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:51.452867  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:51.590141  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:51.655015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:51.949047  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:51.951778  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:52.091793  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:52.156022  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:52.448369  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:52.448494  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:52.592499  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:52.657185  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:52.948673  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:52.949634  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:53.092041  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:53.157391  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:53.451159  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:53.451297  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:53.592141  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:53.655589  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:53.949414  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:53.949654  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:54.090980  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:54.157644  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:54.449578  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:54.449942  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:54.592657  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:54.655642  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:54.949887  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:54.950225  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:55.090862  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:55.155383  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:55.448872  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:55.450478  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:55.592070  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:55.655878  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:55.950573  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:55.951643  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:56.090608  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:56.156601  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:56.449633  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:56.449740  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:56.589768  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:56.656648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:56.951253  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:56.951560  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:57.090880  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:57.155285  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:57.450738  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:57.452046  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:57.590263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:57.657500  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:57.950152  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:57.950364  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:58.091638  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:58.193386  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:58.450222  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:58.450351  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:58.591052  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:58.656102  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:58.948215  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:58.948208  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:59.090720  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:59.155790  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:59.449636  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:59.450870  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:59.589564  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:59.656184  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:59.948230  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:59.948326  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:00.091313  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:00.155446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:00.449946  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:00.449997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:00.590931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:00.655953  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:00.949430  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:00.949437  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:01.091948  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:01.156208  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:01.452056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:01.452238  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:01.590869  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:01.655918  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:01.949266  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:01.950875  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:02.094697  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:02.155278  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:02.456102  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:02.456104  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:02.591508  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:02.657972  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:02.950365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:02.950787  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:03.091261  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:03.155749  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:03.451192  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:03.451626  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:03.592675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:03.657198  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:03.949710  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:03.950534  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:04.090705  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:04.154619  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:04.450263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:04.451232  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:04.589795  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:04.654975  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:04.951313  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:04.952632  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:05.093185  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:05.156889  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:05.448891  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:05.452008  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:05.589422  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:05.655673  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:05.954272  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:05.955495  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:06.090800  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:06.166615  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:06.451261  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:06.451837  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:06.592681  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:06.655679  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:06.949675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:06.949685  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:07.091261  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:07.156385  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:07.449932  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:07.450455  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:07.590827  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:07.655109  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:07.949211  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:07.950064  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:08.090887  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:08.154572  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:08.450690  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:08.450871  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:08.590523  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:08.655973  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:08.948114  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:08.949750  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:09.090989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:09.155955  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:09.449016  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:09.449347  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:09.590817  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:09.656200  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:09.950977  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:09.951430  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:10.091695  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:10.155672  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:10.448805  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:10.449149  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:10.591881  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:10.655305  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:10.948943  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:10.949765  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:11.089778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:11.156846  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:11.450576  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:11.451630  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:11.591910  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:11.657557  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:11.949551  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:11.951423  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:12.090384  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:12.160393  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:12.453917  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:12.453927  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:12.593211  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:12.659806  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:12.963298  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:12.966253  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:13.093949  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:13.194867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:13.468169  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:13.473451  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:13.607669  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:13.664252  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:13.963788  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:13.970682  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:14.100566  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:14.183386  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:14.481122  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:14.481147  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:14.591963  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:14.659279  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:14.953923  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:14.957839  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:15.091640  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:15.160946  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:15.453995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:15.454249  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:15.592976  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:15.657252  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:15.952201  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:15.954099  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:16.091133  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:16.159988  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:16.451755  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:16.452022  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:16.593102  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:16.657191  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:16.949980  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:16.950946  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:17.091395  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:17.156727  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:17.453497  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:17.454292  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:17.590745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:17.658023  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:17.953077  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:17.954404  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:18.144325  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:18.163884  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:18.506329  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:18.507416  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:18.598801  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:18.658864  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:18.951533  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:18.951768  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:19.091399  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:19.157617  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:19.453370  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:19.453419  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:19.590356  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:19.656750  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:19.949694  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:19.952780  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:20.093710  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:20.162888  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:20.455842  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:20.457429  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:20.597047  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:20.658966  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:20.952832  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:20.956314  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:21.093605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:21.160516  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:21.449838  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:21.454368  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:21.590229  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:21.657324  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:21.951876  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:21.955993  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:22.093456  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:22.156844  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:22.452923  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:22.453818  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:22.591894  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:22.664786  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:22.950056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:22.950755  191080 kapi.go:107] duration metric: took 4m31.006766325s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 00:01:23.091356  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:23.164794  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:23.496726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:23.601172  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:23.663423  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:23.954300  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:24.094097  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:24.156533  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:24.450111  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:24.590446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:24.655954  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:24.951486  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:25.101144  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:25.157114  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:25.459936  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:25.589209  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:25.655404  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:25.949290  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:26.091205  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:26.192561  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:26.449239  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:26.594301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:26.695112  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:26.950968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:27.090419  191080 kapi.go:107] duration metric: took 4m31.504642831s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 00:01:27.092322  191080 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-081397 cluster.
	I1212 00:01:27.093973  191080 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 00:01:27.095595  191080 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 00:01:27.155630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:27.448192  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:27.656676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:27.949602  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:28.156035  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:28.452122  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:28.656798  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:28.951030  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:29.155812  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:29.450030  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:29.655506  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:29.950947  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:30.156571  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:30.449689  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:30.657986  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:30.952997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:31.155349  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:31.449194  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:31.657440  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:31.950318  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:32.157071  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:32.449726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:32.657033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:32.950261  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:33.156773  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:33.450869  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:33.655552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:33.950125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:34.156033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:34.449419  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:34.663651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:34.951541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:35.156031  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:35.450253  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:35.655842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:35.948990  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:36.156446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:36.449076  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:36.656334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:36.949221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:37.155204  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:37.448992  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:37.656232  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:37.948670  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:38.155550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:38.449652  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:38.655986  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:38.950165  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:39.156285  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:39.448380  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:39.656058  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:39.950214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:40.157325  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:40.449511  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:40.656623  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:40.952375  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:41.157648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:41.449624  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:41.657125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:41.951249  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:42.157745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:42.451135  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:42.657530  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:42.949771  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:43.155904  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:43.450113  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:43.655365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:43.950157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:44.156180  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:44.450046  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:44.655809  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:44.950604  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:45.155614  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:45.448273  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:45.656354  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:45.950705  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:46.156364  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:46.448416  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:46.658552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:46.949651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:47.158180  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:47.452700  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:47.656868  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:47.949912  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:48.156755  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:48.451939  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:48.656432  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:48.950201  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:49.156157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:49.448332  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:49.656228  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:49.950259  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:50.157269  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:50.448882  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:50.656199  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:50.950248  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:51.156922  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:51.449858  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:51.658522  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:51.950331  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:52.158342  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:52.452607  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:52.657583  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:52.952541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:53.156712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:53.452538  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:53.656385  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:53.949617  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:54.154792  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:54.450797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:54.655995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:54.950745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:55.155328  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:55.448751  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:55.655216  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:55.949363  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:56.157592  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:56.451921  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:56.664544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:56.958059  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:57.156884  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:57.449911  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:57.659329  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:57.950478  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:58.157728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:58.449728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:58.656867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:58.950675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:59.158989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:59.450999  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:59.661594  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:59.948955  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:00.160422  191080 kapi.go:107] duration metric: took 5m6.50988483s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 00:02:00.450671  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:00.952269  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:01.449529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:01.950781  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:02.450250  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:02.953623  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:03.451822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:03.951054  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:04.452684  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:04.952913  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:05.449851  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:05.951096  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:06.448632  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:06.949689  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:07.450190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:07.949743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:08.449834  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:08.949956  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:09.449343  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:09.950154  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:10.449652  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:10.953533  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:11.448912  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:11.950203  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:12.450650  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:12.950028  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:13.451182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:13.950015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:14.449762  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:14.949166  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:15.450181  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:15.950756  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:16.448817  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:16.948583  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:17.449804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:17.951493  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:18.450240  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:18.951299  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:19.450677  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:19.949706  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:20.449531  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:20.950756  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:21.450374  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:21.951339  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:22.449394  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:22.950909  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:23.477937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:23.951049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:24.448664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:24.949615  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:25.449359  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:25.949444  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:26.450501  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:26.949804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:27.450825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:27.948894  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:28.449021  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:28.950004  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:29.450317  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:29.949495  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:30.456871  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:30.948730  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:31.449752  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:31.950182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:32.450002  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:32.948690  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:33.448231  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:33.950565  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:34.450626  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:34.949823  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:35.450102  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:35.948400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:36.449033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:36.949643  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:37.449021  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:37.948018  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:38.455139  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:38.950450  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:39.450566  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:39.949291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:40.450245  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:40.951396  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:41.451789  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:41.949099  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:42.450082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:42.954847  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:43.450792  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:43.949191  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:44.449125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:44.949110  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:45.453064  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:45.948748  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:46.449176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:46.948540  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:47.448415  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:47.950829  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:48.450193  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:48.950076  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:49.450726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:49.949133  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:50.448258  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:50.949440  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:51.448882  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:51.944788  191080 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1212 00:02:51.944836  191080 kapi.go:107] duration metric: took 6m0.000623545s to wait for kubernetes.io/minikube-addons=registry ...
	W1212 00:02:51.944978  191080 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1212 00:02:51.946936  191080 out.go:179] * Enabled addons: amd-gpu-device-plugin, default-storageclass, storage-provisioner, inspektor-gadget, cloud-spanner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, ingress, gcp-auth, csi-hostpath-driver
	I1212 00:02:51.948508  191080 addons.go:530] duration metric: took 6m11.452163579s for enable addons: enabled=[amd-gpu-device-plugin default-storageclass storage-provisioner inspektor-gadget cloud-spanner nvidia-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots ingress gcp-auth csi-hostpath-driver]
	I1212 00:02:51.948603  191080 start.go:247] waiting for cluster config update ...
	I1212 00:02:51.948631  191080 start.go:256] writing updated cluster config ...
	I1212 00:02:51.949105  191080 ssh_runner.go:195] Run: rm -f paused
	I1212 00:02:51.959702  191080 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:02:51.966230  191080 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-prc7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.976818  191080 pod_ready.go:94] pod "coredns-66bc5c9577-prc7f" is "Ready"
	I1212 00:02:51.976851  191080 pod_ready.go:86] duration metric: took 10.502006ms for pod "coredns-66bc5c9577-prc7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.982130  191080 pod_ready.go:83] waiting for pod "etcd-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.989125  191080 pod_ready.go:94] pod "etcd-addons-081397" is "Ready"
	I1212 00:02:51.989162  191080 pod_ready.go:86] duration metric: took 7.000579ms for pod "etcd-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.992364  191080 pod_ready.go:83] waiting for pod "kube-apiserver-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.000110  191080 pod_ready.go:94] pod "kube-apiserver-addons-081397" is "Ready"
	I1212 00:02:52.000155  191080 pod_ready.go:86] duration metric: took 7.740136ms for pod "kube-apiserver-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.004027  191080 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.365676  191080 pod_ready.go:94] pod "kube-controller-manager-addons-081397" is "Ready"
	I1212 00:02:52.365718  191080 pod_ready.go:86] duration metric: took 361.647196ms for pod "kube-controller-manager-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.569885  191080 pod_ready.go:83] waiting for pod "kube-proxy-jwqpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.966570  191080 pod_ready.go:94] pod "kube-proxy-jwqpk" is "Ready"
	I1212 00:02:52.966607  191080 pod_ready.go:86] duration metric: took 396.689665ms for pod "kube-proxy-jwqpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:53.167508  191080 pod_ready.go:83] waiting for pod "kube-scheduler-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:53.566695  191080 pod_ready.go:94] pod "kube-scheduler-addons-081397" is "Ready"
	I1212 00:02:53.566729  191080 pod_ready.go:86] duration metric: took 399.188237ms for pod "kube-scheduler-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:53.566746  191080 pod_ready.go:40] duration metric: took 1.607005753s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:02:53.630859  191080 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:02:53.633243  191080 out.go:179] * Done! kubectl is now configured to use "addons-081397" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.776156384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765498014776080672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa2e7086-d816-402a-abfa-e9f64446768d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.777651432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2612b8db-020d-4962-bc5c-f5d7bb785c3e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.777774332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2612b8db-020d-4962-bc5c-f5d7bb785c3e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.783493948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bfeb2a4c48ed303a0beee307e47a2d26cae96dc06839643f457df160f9c6f2,PodSandboxId:9d20100ccee4577ed153c986ff1f48d50be0e22ee3b2493a51a434c948d45d76,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765497681833096560,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kd757,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 874db328-9c1a-48c4-8119-7a1a97a3cf11,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:572fd150b0b3cdbf79adf495bc22111f724b93a7306aeaff042c2eeb1a8513d5,PodSandboxId:f8b58397de2aee866dad6f33d62bd0ac3346250924a219cd6732e4fe612a1231,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540746764445,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7qwmp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7ed76085-0a79-467c-917a-ecde6507a700,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:296863295f8c130fbde70d1da22c1071a62a7f186972626eba7018c154f0b376,PodSandboxId:4b777612bda36d9811947fc19f7979f2fc437f3721a714088004cb89ad366dfc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540634037507,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kxmfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d53af91f-436f-4180-a282-279e52fe615d,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b,PodSandboxId:c1b5ac0ad6da0f8015f757c6d3b289cce8fa504574c2ca4088a745249f081b7f,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,}
,Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765497470859928585,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-fpbst,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b792e8c5-5d38-4540-b39b-8c2a3f475c97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:548a2242825e0a145e6c7b6a1308130225640e19cf3ad9818c0ea69de7b85735,PodSandboxId:d6d7a06f077838b822fd4eae5cd7e90ea733ace45ab036f433bde1113adc9a4
5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765497436440682806,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7df0e3-b14f-46c9-8338-f54a7557bdd0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8605d
7a5d58687d2c0480f86784cb6029e777f6294dab4a712d624f6b42a09e,PodSandboxId:de233462b342d9e6ae89f2996678458daa65ef4c22ddad8fa4244c37173ac655,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83572eb9c0645f2bb3a855b8e24c2c8d0bd9ee3fd48c18a84369038089362134,State:CONTAINER_RUNNING,CreatedAt:1765497427174344836,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5bdddb765-rlznc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73c4629e-b87d-4d90-bcf2-4c4b3ca62b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 22fee82c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Annotations:map[string]string{io.kubernetes.container.hash
: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.contain
er.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAI
NER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497
387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf677b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a9
79d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daae
f29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2612b8db-020d-4962-bc5c-
f5d7bb785c3e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.833428472Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69e9a88e-d2fd-4b23-b98b-c43dc3da58ff name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.833542632Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69e9a88e-d2fd-4b23-b98b-c43dc3da58ff name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.835316984Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b18a37bf-59e7-4b91-9bef-40ff9c2ce4bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.836473204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765498014836438021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b18a37bf-59e7-4b91-9bef-40ff9c2ce4bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.837901425Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6eb3916-d8ee-4209-b956-34d507b8cc09 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.838019073Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6eb3916-d8ee-4209-b956-34d507b8cc09 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.839295313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bfeb2a4c48ed303a0beee307e47a2d26cae96dc06839643f457df160f9c6f2,PodSandboxId:9d20100ccee4577ed153c986ff1f48d50be0e22ee3b2493a51a434c948d45d76,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765497681833096560,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kd757,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 874db328-9c1a-48c4-8119-7a1a97a3cf11,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:572fd150b0b3cdbf79adf495bc22111f724b93a7306aeaff042c2eeb1a8513d5,PodSandboxId:f8b58397de2aee866dad6f33d62bd0ac3346250924a219cd6732e4fe612a1231,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540746764445,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7qwmp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7ed76085-0a79-467c-917a-ecde6507a700,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:296863295f8c130fbde70d1da22c1071a62a7f186972626eba7018c154f0b376,PodSandboxId:4b777612bda36d9811947fc19f7979f2fc437f3721a714088004cb89ad366dfc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540634037507,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kxmfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d53af91f-436f-4180-a282-279e52fe615d,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b,PodSandboxId:c1b5ac0ad6da0f8015f757c6d3b289cce8fa504574c2ca4088a745249f081b7f,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,}
,Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765497470859928585,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-fpbst,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b792e8c5-5d38-4540-b39b-8c2a3f475c97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:548a2242825e0a145e6c7b6a1308130225640e19cf3ad9818c0ea69de7b85735,PodSandboxId:d6d7a06f077838b822fd4eae5cd7e90ea733ace45ab036f433bde1113adc9a4
5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765497436440682806,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7df0e3-b14f-46c9-8338-f54a7557bdd0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8605d
7a5d58687d2c0480f86784cb6029e777f6294dab4a712d624f6b42a09e,PodSandboxId:de233462b342d9e6ae89f2996678458daa65ef4c22ddad8fa4244c37173ac655,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83572eb9c0645f2bb3a855b8e24c2c8d0bd9ee3fd48c18a84369038089362134,State:CONTAINER_RUNNING,CreatedAt:1765497427174344836,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5bdddb765-rlznc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73c4629e-b87d-4d90-bcf2-4c4b3ca62b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 22fee82c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Annotations:map[string]string{io.kubernetes.container.hash
: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.contain
er.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAI
NER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497
387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf677b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a9
79d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daae
f29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6eb3916-d8ee-4209-b956-
34d507b8cc09 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.877747930Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c531275-1ddb-4259-a6fe-7976ff7ad7fe name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.877889149Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c531275-1ddb-4259-a6fe-7976ff7ad7fe name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.880418545Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57a71e52-f380-4566-b6f5-d24d12b99175 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.883476587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765498014883428159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57a71e52-f380-4566-b6f5-d24d12b99175 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.885636060Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a2ad02c-97b0-4409-ab74-8dd78863f830 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.885763162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a2ad02c-97b0-4409-ab74-8dd78863f830 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.886389032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bfeb2a4c48ed303a0beee307e47a2d26cae96dc06839643f457df160f9c6f2,PodSandboxId:9d20100ccee4577ed153c986ff1f48d50be0e22ee3b2493a51a434c948d45d76,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765497681833096560,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kd757,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 874db328-9c1a-48c4-8119-7a1a97a3cf11,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:572fd150b0b3cdbf79adf495bc22111f724b93a7306aeaff042c2eeb1a8513d5,PodSandboxId:f8b58397de2aee866dad6f33d62bd0ac3346250924a219cd6732e4fe612a1231,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540746764445,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7qwmp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7ed76085-0a79-467c-917a-ecde6507a700,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:296863295f8c130fbde70d1da22c1071a62a7f186972626eba7018c154f0b376,PodSandboxId:4b777612bda36d9811947fc19f7979f2fc437f3721a714088004cb89ad366dfc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540634037507,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kxmfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d53af91f-436f-4180-a282-279e52fe615d,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b,PodSandboxId:c1b5ac0ad6da0f8015f757c6d3b289cce8fa504574c2ca4088a745249f081b7f,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,}
,Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765497470859928585,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-fpbst,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b792e8c5-5d38-4540-b39b-8c2a3f475c97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:548a2242825e0a145e6c7b6a1308130225640e19cf3ad9818c0ea69de7b85735,PodSandboxId:d6d7a06f077838b822fd4eae5cd7e90ea733ace45ab036f433bde1113adc9a4
5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765497436440682806,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7df0e3-b14f-46c9-8338-f54a7557bdd0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8605d
7a5d58687d2c0480f86784cb6029e777f6294dab4a712d624f6b42a09e,PodSandboxId:de233462b342d9e6ae89f2996678458daa65ef4c22ddad8fa4244c37173ac655,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83572eb9c0645f2bb3a855b8e24c2c8d0bd9ee3fd48c18a84369038089362134,State:CONTAINER_RUNNING,CreatedAt:1765497427174344836,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5bdddb765-rlznc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73c4629e-b87d-4d90-bcf2-4c4b3ca62b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 22fee82c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Annotations:map[string]string{io.kubernetes.container.hash
: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.contain
er.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAI
NER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497
387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf677b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a9
79d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daae
f29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a2ad02c-97b0-4409-ab74-
8dd78863f830 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.931088806Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=802c3270-01f1-4397-ad0e-0db9a5af9360 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.931212172Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=802c3270-01f1-4397-ad0e-0db9a5af9360 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.935134866Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6cea10b-f9fd-4b08-bff9-bd3a93e0da1e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.936608242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765498014936569267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6cea10b-f9fd-4b08-bff9-bd3a93e0da1e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.938695090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3261f557-5e4e-4b19-999e-0e4c9d05801d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.938920995Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3261f557-5e4e-4b19-999e-0e4c9d05801d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:54 addons-081397 crio[814]: time="2025-12-12 00:06:54.939656414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bfeb2a4c48ed303a0beee307e47a2d26cae96dc06839643f457df160f9c6f2,PodSandboxId:9d20100ccee4577ed153c986ff1f48d50be0e22ee3b2493a51a434c948d45d76,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765497681833096560,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kd757,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 874db328-9c1a-48c4-8119-7a1a97a3cf11,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:572fd150b0b3cdbf79adf495bc22111f724b93a7306aeaff042c2eeb1a8513d5,PodSandboxId:f8b58397de2aee866dad6f33d62bd0ac3346250924a219cd6732e4fe612a1231,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540746764445,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7qwmp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7ed76085-0a79-467c-917a-ecde6507a700,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:296863295f8c130fbde70d1da22c1071a62a7f186972626eba7018c154f0b376,PodSandboxId:4b777612bda36d9811947fc19f7979f2fc437f3721a714088004cb89ad366dfc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540634037507,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kxmfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d53af91f-436f-4180-a282-279e52fe615d,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b,PodSandboxId:c1b5ac0ad6da0f8015f757c6d3b289cce8fa504574c2ca4088a745249f081b7f,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,}
,Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765497470859928585,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-fpbst,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b792e8c5-5d38-4540-b39b-8c2a3f475c97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:548a2242825e0a145e6c7b6a1308130225640e19cf3ad9818c0ea69de7b85735,PodSandboxId:d6d7a06f077838b822fd4eae5cd7e90ea733ace45ab036f433bde1113adc9a4
5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765497436440682806,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7df0e3-b14f-46c9-8338-f54a7557bdd0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8605d
7a5d58687d2c0480f86784cb6029e777f6294dab4a712d624f6b42a09e,PodSandboxId:de233462b342d9e6ae89f2996678458daa65ef4c22ddad8fa4244c37173ac655,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83572eb9c0645f2bb3a855b8e24c2c8d0bd9ee3fd48c18a84369038089362134,State:CONTAINER_RUNNING,CreatedAt:1765497427174344836,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5bdddb765-rlznc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73c4629e-b87d-4d90-bcf2-4c4b3ca62b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 22fee82c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Annotations:map[string]string{io.kubernetes.container.hash
: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.contain
er.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAI
NER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497
387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf677b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a9
79d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daae
f29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3261f557-5e4e-4b19-999e-
0e4c9d05801d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	825fa31ff05b6       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                           2 minutes ago       Running             nginx                     0                   8c904991200ec       nginx                                       default
	25d63d362311b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   32cdf5109ec8d       busybox                                     default
	e7bfeb2a4c48e       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             5 minutes ago       Running             controller                0                   9d20100ccee45       ingress-nginx-controller-85d4c799dd-kd757   ingress-nginx
	572fd150b0b3c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   7 minutes ago       Exited              patch                     0                   f8b58397de2ae       ingress-nginx-admission-patch-7qwmp         ingress-nginx
	296863295f8c1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   7 minutes ago       Exited              create                    0                   4b777612bda36       ingress-nginx-admission-create-kxmfj        ingress-nginx
	86266748a7014       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac              8 minutes ago       Running             registry-proxy            0                   4f522a691840e       registry-proxy-fdnc8                        kube-system
	c0808e7e8387c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             9 minutes ago       Running             local-path-provisioner    0                   c1b5ac0ad6da0       local-path-provisioner-648f6765c9-fpbst     local-path-storage
	548a2242825e0       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               9 minutes ago       Running             minikube-ingress-dns      0                   d6d7a06f07783       kube-ingress-dns-minikube                   kube-system
	b8605d7a5d586       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf               9 minutes ago       Running             cloud-spanner-emulator    0                   de233462b342d       cloud-spanner-emulator-5bdddb765-rlznc      default
	0ee283d133145       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     9 minutes ago       Running             amd-gpu-device-plugin     0                   d6396506b4332       amd-gpu-device-plugin-djxv6                 kube-system
	636669d18a2e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             9 minutes ago       Running             storage-provisioner       0                   d4c844a547362       storage-provisioner                         kube-system
	079f9768ce55c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             10 minutes ago      Running             coredns                   0                   241bbeea7c618       coredns-66bc5c9577-prc7f                    kube-system
	7f5ed4f373cfd       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             10 minutes ago      Running             kube-proxy                0                   84c65d7d95ff4       kube-proxy-jwqpk                            kube-system
	7ace0e7fbfc94       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             10 minutes ago      Running             kube-controller-manager   0                   f75b7d32aa473       kube-controller-manager-addons-081397       kube-system
	d8612fac71b8e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             10 minutes ago      Running             kube-scheduler            0                   32069928e35e6       kube-scheduler-addons-081397                kube-system
	712e27a28f3ca       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             10 minutes ago      Running             kube-apiserver            0                   78928c0146bf6       kube-apiserver-addons-081397                kube-system
	f00e427bcb7fb       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             10 minutes ago      Running             etcd                      0                   d442318c9ea69       etcd-addons-081397                          kube-system
	
	
	==> coredns [079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56] <==
	[INFO] 10.244.0.10:43378 - 26612 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000105831s
	[INFO] 10.244.0.10:46632 - 13171 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000319589s
	[INFO] 10.244.0.10:46632 - 20660 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000391225s
	[INFO] 10.244.0.10:46632 - 3918 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00015137s
	[INFO] 10.244.0.10:46632 - 64648 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000187645s
	[INFO] 10.244.0.10:46632 - 6353 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000098783s
	[INFO] 10.244.0.10:46632 - 4585 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000128495s
	[INFO] 10.244.0.10:46632 - 8014 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000130646s
	[INFO] 10.244.0.10:46632 - 52960 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000126072s
	[INFO] 10.244.0.10:54771 - 38406 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000347036s
	[INFO] 10.244.0.10:54771 - 45429 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000322077s
	[INFO] 10.244.0.10:54771 - 33229 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000190686s
	[INFO] 10.244.0.10:54771 - 970 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00106917s
	[INFO] 10.244.0.10:54771 - 33077 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000147872s
	[INFO] 10.244.0.10:54771 - 10775 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000510757s
	[INFO] 10.244.0.10:54771 - 52503 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000156327s
	[INFO] 10.244.0.10:54771 - 32676 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000281352s
	[INFO] 10.244.0.10:53374 - 58934 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00018807s
	[INFO] 10.244.0.10:53374 - 31252 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000250795s
	[INFO] 10.244.0.10:53374 - 11275 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000284682s
	[INFO] 10.244.0.10:53374 - 14820 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000283983s
	[INFO] 10.244.0.10:53374 - 51234 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000105101s
	[INFO] 10.244.0.10:53374 - 49286 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000116196s
	[INFO] 10.244.0.10:53374 - 34512 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000116833s
	[INFO] 10.244.0.10:53374 - 46679 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00011131s
	
	
	==> describe nodes <==
	Name:               addons-081397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-081397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=addons-081397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_11T23_56_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-081397
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 11 Dec 2025 23:56:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-081397
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:06:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:04:14 +0000   Thu, 11 Dec 2025 23:56:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:04:14 +0000   Thu, 11 Dec 2025 23:56:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:04:14 +0000   Thu, 11 Dec 2025 23:56:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:04:14 +0000   Thu, 11 Dec 2025 23:56:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    addons-081397
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3908Mi
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3908Mi
	  pods:               110
	System Info:
	  Machine ID:                 132f08c043de4a3fabcb9cf58535d902
	  System UUID:                132f08c0-43de-4a3f-abcb-9cf58535d902
	  Boot ID:                    7a0deef8-e8c7-4912-a254-b2bd4a5f2873
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  default                     cloud-spanner-emulator-5bdddb765-rlznc       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-world-app-5d498dc89-gqw57              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-kd757    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 amd-gpu-device-plugin-djxv6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-prc7f                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-081397                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-081397                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-081397        200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-jwqpk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-081397                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 registry-6b586f9694-f9q5b                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 registry-creds-764b6fb674-fn77c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 registry-proxy-fdnc8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-648f6765c9-fpbst      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-081397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-081397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-081397 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-081397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-081397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-081397 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m                kubelet          Node addons-081397 status is now: NodeReady
	  Normal  RegisteredNode           10m                node-controller  Node addons-081397 event: Registered Node addons-081397 in Controller
	
	
	==> dmesg <==
	[Dec11 23:59] kauditd_printk_skb: 101 callbacks suppressed
	[  +4.064218] kauditd_printk_skb: 111 callbacks suppressed
	[  +0.976500] kauditd_printk_skb: 88 callbacks suppressed
	[ +28.653620] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:00] kauditd_printk_skb: 5 callbacks suppressed
	[Dec12 00:01] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.217370] kauditd_printk_skb: 65 callbacks suppressed
	[  +8.838372] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.897990] kauditd_printk_skb: 38 callbacks suppressed
	[ +21.981224] kauditd_printk_skb: 2 callbacks suppressed
	[Dec12 00:02] kauditd_printk_skb: 20 callbacks suppressed
	[Dec12 00:03] kauditd_printk_skb: 26 callbacks suppressed
	[ +10.025958] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.337280] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.693930] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.887520] kauditd_printk_skb: 43 callbacks suppressed
	[  +1.616504] kauditd_printk_skb: 83 callbacks suppressed
	[Dec12 00:04] kauditd_printk_skb: 89 callbacks suppressed
	[  +0.000054] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.912354] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.453375] kauditd_printk_skb: 127 callbacks suppressed
	[  +0.000073] kauditd_printk_skb: 11 callbacks suppressed
	[Dec12 00:06] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.863558] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.805188] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529] <==
	{"level":"info","ts":"2025-12-11T23:57:54.563834Z","caller":"traceutil/trace.go:172","msg":"trace[62183190] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1035; }","duration":"119.045886ms","start":"2025-12-11T23:57:54.444782Z","end":"2025-12-11T23:57:54.563827Z","steps":["trace[62183190] 'agreement among raft nodes before linearized reading'  (duration: 118.982426ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-11T23:57:54.564190Z","caller":"traceutil/trace.go:172","msg":"trace[2039299796] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"179.02635ms","start":"2025-12-11T23:57:54.385155Z","end":"2025-12-11T23:57:54.564182Z","steps":["trace[2039299796] 'process raft request'  (duration: 178.524709ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:57:54.565247Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.428918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:57:54.565413Z","caller":"traceutil/trace.go:172","msg":"trace[222868242] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1035; }","duration":"119.534642ms","start":"2025-12-11T23:57:54.445807Z","end":"2025-12-11T23:57:54.565342Z","steps":["trace[222868242] 'agreement among raft nodes before linearized reading'  (duration: 119.367809ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-11T23:58:03.552158Z","caller":"traceutil/trace.go:172","msg":"trace[1638119342] linearizableReadLoop","detail":"{readStateIndex:1095; appliedIndex:1096; }","duration":"156.418496ms","start":"2025-12-11T23:58:03.395726Z","end":"2025-12-11T23:58:03.552144Z","steps":["trace[1638119342] 'read index received'  (duration: 156.415444ms)","trace[1638119342] 'applied index is now lower than readState.Index'  (duration: 2.503µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-11T23:58:03.552301Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.56477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:58:03.552320Z","caller":"traceutil/trace.go:172","msg":"trace[928892129] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1059; }","duration":"156.592939ms","start":"2025-12-11T23:58:03.395722Z","end":"2025-12-11T23:58:03.552315Z","steps":["trace[928892129] 'agreement among raft nodes before linearized reading'  (duration: 156.542706ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:58:03.554244Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.397714ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:58:03.555824Z","caller":"traceutil/trace.go:172","msg":"trace[949728136] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1059; }","duration":"111.983139ms","start":"2025-12-11T23:58:03.443830Z","end":"2025-12-11T23:58:03.555813Z","steps":["trace[949728136] 'agreement among raft nodes before linearized reading'  (duration: 110.370385ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-11T23:58:03.554796Z","caller":"traceutil/trace.go:172","msg":"trace[1547687040] transaction","detail":"{read_only:false; response_revision:1060; number_of_response:1; }","duration":"112.058069ms","start":"2025-12-11T23:58:03.442727Z","end":"2025-12-11T23:58:03.554786Z","steps":["trace[1547687040] 'process raft request'  (duration: 111.966352ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:58:03.555039Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.923532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:58:03.556516Z","caller":"traceutil/trace.go:172","msg":"trace[1507526217] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1060; }","duration":"113.405565ms","start":"2025-12-11T23:58:03.443103Z","end":"2025-12-11T23:58:03.556508Z","steps":["trace[1507526217] 'agreement among raft nodes before linearized reading'  (duration: 111.826397ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:59:39.393302Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.001392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-12-11T23:59:39.393692Z","caller":"traceutil/trace.go:172","msg":"trace[1235881685] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1239; }","duration":"171.464156ms","start":"2025-12-11T23:59:39.222198Z","end":"2025-12-11T23:59:39.393662Z","steps":["trace[1235881685] 'range keys from in-memory index tree'  (duration: 170.767828ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:59:39.393736Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"240.832598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:59:39.393801Z","caller":"traceutil/trace.go:172","msg":"trace[1862727742] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1239; }","duration":"240.918211ms","start":"2025-12-11T23:59:39.152870Z","end":"2025-12-11T23:59:39.393789Z","steps":["trace[1862727742] 'range keys from in-memory index tree'  (duration: 240.669473ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:01:18.494075Z","caller":"traceutil/trace.go:172","msg":"trace[729500464] transaction","detail":"{read_only:false; response_revision:1398; number_of_response:1; }","duration":"106.783316ms","start":"2025-12-12T00:01:18.387266Z","end":"2025-12-12T00:01:18.494049Z","steps":["trace[729500464] 'process raft request'  (duration: 106.410306ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:02:27.300606Z","caller":"traceutil/trace.go:172","msg":"trace[636598247] transaction","detail":"{read_only:false; response_revision:1559; number_of_response:1; }","duration":"178.765669ms","start":"2025-12-12T00:02:27.121805Z","end":"2025-12-12T00:02:27.300571Z","steps":["trace[636598247] 'process raft request'  (duration: 178.598198ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:03:50.302340Z","caller":"traceutil/trace.go:172","msg":"trace[1017299845] linearizableReadLoop","detail":"{readStateIndex:1944; appliedIndex:1944; }","duration":"211.137553ms","start":"2025-12-12T00:03:50.091151Z","end":"2025-12-12T00:03:50.302289Z","steps":["trace[1017299845] 'read index received'  (duration: 211.129428ms)","trace[1017299845] 'applied index is now lower than readState.Index'  (duration: 7.353µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T00:03:50.302716Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"211.444735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:03:50.302750Z","caller":"traceutil/trace.go:172","msg":"trace[412680698] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1831; }","duration":"211.595679ms","start":"2025-12-12T00:03:50.091146Z","end":"2025-12-12T00:03:50.302742Z","steps":["trace[412680698] 'agreement among raft nodes before linearized reading'  (duration: 211.378448ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:03:50.302919Z","caller":"traceutil/trace.go:172","msg":"trace[333147361] transaction","detail":"{read_only:false; response_revision:1832; number_of_response:1; }","duration":"278.806483ms","start":"2025-12-12T00:03:50.024100Z","end":"2025-12-12T00:03:50.302907Z","steps":["trace[333147361] 'process raft request'  (duration: 278.330678ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:06:28.833586Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1445}
	{"level":"info","ts":"2025-12-12T00:06:28.938406Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1445,"took":"103.595743ms","hash":3397286304,"current-db-size-bytes":6336512,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4149248,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2025-12-12T00:06:28.938497Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3397286304,"revision":1445,"compact-revision":-1}
	
	
	==> kernel <==
	 00:06:55 up 10 min,  0 users,  load average: 1.37, 1.28, 0.92
	Linux addons-081397 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0] <==
	E1211 23:57:51.421606       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.158.20:443: connect: connection refused" logger="UnhandledError"
	E1211 23:57:51.423760       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.158.20:443: connect: connection refused" logger="UnhandledError"
	I1211 23:57:51.587823       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 00:03:28.639031       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8443->192.168.39.1:47840: use of closed network connection
	E1212 00:03:28.907630       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8443->192.168.39.1:47866: use of closed network connection
	I1212 00:03:38.672372       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.240.212"}
	I1212 00:03:52.460336       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1212 00:03:56.689684       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 00:03:56.942418       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.190.115"}
	I1212 00:04:08.084581       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1212 00:04:26.464334       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.464735       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.589812       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.589919       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.683731       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.683804       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.703860       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.704083       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.747552       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.747633       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1212 00:04:27.684485       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1212 00:04:27.749684       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1212 00:04:27.811321       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1212 00:06:30.996030       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:06:53.209224       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.181.204"}
	
	
	==> kube-controller-manager [7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1] <==
	E1212 00:04:46.383547       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:04:46.384898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:04:48.248271       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:04:48.249716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:04:58.780298       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:04:58.782292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:05:00.148817       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:05:00.150772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:05:08.432389       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:05:08.433992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:05:28.580816       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:05:28.581925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:05:42.387829       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:05:42.393861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:05:56.480899       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:05:56.482116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:06:18.044047       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:06:18.045726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:06:28.574687       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:06:28.576064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:06:38.552726       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:06:38.554143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1212 00:06:48.988911       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	E1212 00:06:49.400063       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:06:49.401478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a] <==
	I1211 23:56:41.129554       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1211 23:56:41.230792       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1211 23:56:41.230832       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.2"]
	E1211 23:56:41.230926       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:56:41.372420       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1211 23:56:41.372474       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1211 23:56:41.372505       1 server_linux.go:132] "Using iptables Proxier"
	I1211 23:56:41.403791       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:56:41.404681       1 server.go:527] "Version info" version="v1.34.2"
	I1211 23:56:41.404798       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:56:41.409627       1 config.go:200] "Starting service config controller"
	I1211 23:56:41.409659       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1211 23:56:41.409674       1 config.go:106] "Starting endpoint slice config controller"
	I1211 23:56:41.409677       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1211 23:56:41.409687       1 config.go:403] "Starting serviceCIDR config controller"
	I1211 23:56:41.409690       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1211 23:56:41.421538       1 config.go:309] "Starting node config controller"
	I1211 23:56:41.421577       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1211 23:56:41.421584       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1211 23:56:41.510201       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1211 23:56:41.510238       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1211 23:56:41.510294       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379] <==
	E1211 23:56:31.058088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1211 23:56:31.058174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1211 23:56:31.058339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1211 23:56:31.058520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1211 23:56:31.058583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1211 23:56:31.878612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1211 23:56:31.916337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1211 23:56:31.929867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1211 23:56:31.934421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1211 23:56:31.956823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1211 23:56:31.994674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1211 23:56:32.004329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1211 23:56:32.010178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1211 23:56:32.026980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1211 23:56:32.052788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1211 23:56:32.154842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1211 23:56:32.220469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1211 23:56:32.267618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1211 23:56:32.308064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1211 23:56:32.344466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1211 23:56:32.371737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1211 23:56:32.397888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1211 23:56:32.548714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1211 23:56:32.628885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1211 23:56:34.946153       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:06:46 addons-081397 kubelet[1522]: I1212 00:06:46.910921    1522 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgjj5\" (UniqueName: \"kubernetes.io/projected/d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc-kube-api-access-rgjj5\") pod \"d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc\" (UID: \"d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc\") "
	Dec 12 00:06:46 addons-081397 kubelet[1522]: I1212 00:06:46.911873    1522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc-data" (OuterVolumeSpecName: "data") pod "d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc" (UID: "d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 12 00:06:46 addons-081397 kubelet[1522]: I1212 00:06:46.912523    1522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc-script" (OuterVolumeSpecName: "script") pod "d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc" (UID: "d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 12 00:06:46 addons-081397 kubelet[1522]: I1212 00:06:46.913604    1522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc-kube-api-access-rgjj5" (OuterVolumeSpecName: "kube-api-access-rgjj5") pod "d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc" (UID: "d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc"). InnerVolumeSpecName "kube-api-access-rgjj5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 12 00:06:47 addons-081397 kubelet[1522]: I1212 00:06:47.012178    1522 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rgjj5\" (UniqueName: \"kubernetes.io/projected/d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc-kube-api-access-rgjj5\") on node \"addons-081397\" DevicePath \"\""
	Dec 12 00:06:47 addons-081397 kubelet[1522]: I1212 00:06:47.012263    1522 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc-data\") on node \"addons-081397\" DevicePath \"\""
	Dec 12 00:06:47 addons-081397 kubelet[1522]: I1212 00:06:47.012274    1522 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc-script\") on node \"addons-081397\" DevicePath \"\""
	Dec 12 00:06:48 addons-081397 kubelet[1522]: I1212 00:06:48.546668    1522 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-djxv6" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:06:48 addons-081397 kubelet[1522]: I1212 00:06:48.555284    1522 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc" path="/var/lib/kubelet/pods/d35b85ef-ec1f-4a5c-bfbb-abc48c1e47cc/volumes"
	Dec 12 00:06:50 addons-081397 kubelet[1522]: I1212 00:06:50.547287    1522 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-f9q5b" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:06:50 addons-081397 kubelet[1522]: E1212 00:06:50.550850    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-f9q5b" podUID="96c372a4-ae7e-4df5-9a48-525fc42f8bc5"
	Dec 12 00:06:51 addons-081397 kubelet[1522]: I1212 00:06:51.145292    1522 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/22649f4f-f712-4939-86ae-d4e2f87acc0a-device-plugin\") pod \"22649f4f-f712-4939-86ae-d4e2f87acc0a\" (UID: \"22649f4f-f712-4939-86ae-d4e2f87acc0a\") "
	Dec 12 00:06:51 addons-081397 kubelet[1522]: I1212 00:06:51.145380    1522 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd2n5\" (UniqueName: \"kubernetes.io/projected/22649f4f-f712-4939-86ae-d4e2f87acc0a-kube-api-access-vd2n5\") pod \"22649f4f-f712-4939-86ae-d4e2f87acc0a\" (UID: \"22649f4f-f712-4939-86ae-d4e2f87acc0a\") "
	Dec 12 00:06:51 addons-081397 kubelet[1522]: I1212 00:06:51.145545    1522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22649f4f-f712-4939-86ae-d4e2f87acc0a-device-plugin" (OuterVolumeSpecName: "device-plugin") pod "22649f4f-f712-4939-86ae-d4e2f87acc0a" (UID: "22649f4f-f712-4939-86ae-d4e2f87acc0a"). InnerVolumeSpecName "device-plugin". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 12 00:06:51 addons-081397 kubelet[1522]: I1212 00:06:51.149885    1522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22649f4f-f712-4939-86ae-d4e2f87acc0a-kube-api-access-vd2n5" (OuterVolumeSpecName: "kube-api-access-vd2n5") pod "22649f4f-f712-4939-86ae-d4e2f87acc0a" (UID: "22649f4f-f712-4939-86ae-d4e2f87acc0a"). InnerVolumeSpecName "kube-api-access-vd2n5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 12 00:06:51 addons-081397 kubelet[1522]: I1212 00:06:51.246402    1522 reconciler_common.go:299] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/22649f4f-f712-4939-86ae-d4e2f87acc0a-device-plugin\") on node \"addons-081397\" DevicePath \"\""
	Dec 12 00:06:51 addons-081397 kubelet[1522]: I1212 00:06:51.246444    1522 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vd2n5\" (UniqueName: \"kubernetes.io/projected/22649f4f-f712-4939-86ae-d4e2f87acc0a-kube-api-access-vd2n5\") on node \"addons-081397\" DevicePath \"\""
	Dec 12 00:06:51 addons-081397 kubelet[1522]: I1212 00:06:51.707090    1522 scope.go:117] "RemoveContainer" containerID="18dc2aaa71a240c99745dc2c923535640e4fbe2f9124e4a5c1e1d6ffb67b92d5"
	Dec 12 00:06:51 addons-081397 kubelet[1522]: I1212 00:06:51.845665    1522 scope.go:117] "RemoveContainer" containerID="18dc2aaa71a240c99745dc2c923535640e4fbe2f9124e4a5c1e1d6ffb67b92d5"
	Dec 12 00:06:51 addons-081397 kubelet[1522]: E1212 00:06:51.846851    1522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18dc2aaa71a240c99745dc2c923535640e4fbe2f9124e4a5c1e1d6ffb67b92d5\": container with ID starting with 18dc2aaa71a240c99745dc2c923535640e4fbe2f9124e4a5c1e1d6ffb67b92d5 not found: ID does not exist" containerID="18dc2aaa71a240c99745dc2c923535640e4fbe2f9124e4a5c1e1d6ffb67b92d5"
	Dec 12 00:06:51 addons-081397 kubelet[1522]: I1212 00:06:51.847119    1522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18dc2aaa71a240c99745dc2c923535640e4fbe2f9124e4a5c1e1d6ffb67b92d5"} err="failed to get container status \"18dc2aaa71a240c99745dc2c923535640e4fbe2f9124e4a5c1e1d6ffb67b92d5\": rpc error: code = NotFound desc = could not find container \"18dc2aaa71a240c99745dc2c923535640e4fbe2f9124e4a5c1e1d6ffb67b92d5\": container with ID starting with 18dc2aaa71a240c99745dc2c923535640e4fbe2f9124e4a5c1e1d6ffb67b92d5 not found: ID does not exist"
	Dec 12 00:06:52 addons-081397 kubelet[1522]: I1212 00:06:52.554932    1522 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22649f4f-f712-4939-86ae-d4e2f87acc0a" path="/var/lib/kubelet/pods/22649f4f-f712-4939-86ae-d4e2f87acc0a/volumes"
	Dec 12 00:06:53 addons-081397 kubelet[1522]: I1212 00:06:53.160590    1522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk5gl\" (UniqueName: \"kubernetes.io/projected/fa636fe7-3020-41c2-8bcf-0efb5485419e-kube-api-access-rk5gl\") pod \"hello-world-app-5d498dc89-gqw57\" (UID: \"fa636fe7-3020-41c2-8bcf-0efb5485419e\") " pod="default/hello-world-app-5d498dc89-gqw57"
	Dec 12 00:06:55 addons-081397 kubelet[1522]: E1212 00:06:55.224199    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765498015222910986 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:06:55 addons-081397 kubelet[1522]: E1212 00:06:55.224230    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765498015222910986 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	
	
	==> storage-provisioner [636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529] <==
	W1212 00:06:29.697252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:31.701828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:31.714916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:33.721914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:33.729287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:35.735254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:35.747465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:37.753184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:37.768689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:39.774096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:39.781493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:41.785507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:41.794678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:43.804868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:43.815261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:45.820035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:45.828068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:47.833860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:47.841548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:49.847365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:49.856213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:51.862140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:51.869092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:53.875597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:53.893281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-081397 -n addons-081397
helpers_test.go:270: (dbg) Run:  kubectl --context addons-081397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-gqw57 test-local-path ingress-nginx-admission-create-kxmfj ingress-nginx-admission-patch-7qwmp registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-081397 describe pod hello-world-app-5d498dc89-gqw57 test-local-path ingress-nginx-admission-create-kxmfj ingress-nginx-admission-patch-7qwmp registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-081397 describe pod hello-world-app-5d498dc89-gqw57 test-local-path ingress-nginx-admission-create-kxmfj ingress-nginx-admission-patch-7qwmp registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c: exit status 1 (207.88821ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-gqw57
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-081397/192.168.39.2
	Start Time:       Fri, 12 Dec 2025 00:06:53 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rk5gl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rk5gl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-gqw57 to addons-081397
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjvf5 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-sjvf5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kxmfj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7qwmp" not found
	Error from server (NotFound): pods "registry-6b586f9694-f9q5b" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-fn77c" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-081397 describe pod hello-world-app-5d498dc89-gqw57 test-local-path ingress-nginx-admission-create-kxmfj ingress-nginx-admission-patch-7qwmp registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-081397 addons disable ingress-dns --alsologtostderr -v=1: (1.260344544s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-081397 addons disable ingress --alsologtostderr -v=1: (8.194567684s)
--- FAIL: TestAddons/parallel/Ingress (189.88s)

TestAddons/parallel/LocalPath (428.74s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-081397 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-081397 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Non-zero exit: kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (2.928µs)
helpers_test.go:405: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:962: failed waiting for PVC test-pvc: context deadline exceeded
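The failure above is the test harness repeatedly polling the PVC's `status.phase` until its context deadline lapses. A minimal sketch of that polling loop in shell, where `get_phase` is a hypothetical stand-in for the real `kubectl --context addons-081397 get pvc test-pvc -o jsonpath={.status.phase} -n default` query:

```shell
# get_phase is a hypothetical stand-in for the kubectl jsonpath query in
# the log above; a real run would invoke kubectl against the cluster and
# the phase would stay "Pending" until the volume is provisioned.
get_phase() { echo "Bound"; }

attempts=0
max_attempts=150   # roughly a 5-minute budget at 2s per poll
phase=""
while [ "$attempts" -lt "$max_attempts" ]; do
  phase=$(get_phase)
  [ "$phase" = "Bound" ] && break   # PVC is usable once phase is Bound
  sleep 2
  attempts=$((attempts + 1))
done
echo "phase=$phase"
```

With a recent kubectl the same wait can be expressed as a single command, e.g. `kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/test-pvc -n default --timeout=5m`.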
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-081397 -n addons-081397
helpers_test.go:253: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-081397 logs -n 25: (1.725119529s)
helpers_test.go:261: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-525167                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-525167 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-449217                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-449217 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-859495                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-859495 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ --download-only -p binary-mirror-928519 --alsologtostderr --binary-mirror http://127.0.0.1:46143 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-928519 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ -p binary-mirror-928519                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-928519 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ addons  │ enable dashboard -p addons-081397                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-081397        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-081397                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-081397        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ start   │ -p addons-081397 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-081397        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 12 Dec 25 00:02 UTC │
	│ addons  │ addons-081397 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:02 UTC │ 12 Dec 25 00:02 UTC │
	│ addons  │ addons-081397 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ enable headlamp -p addons-081397 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-081397                                                                                                                                                                                                                                                                                                                                                                                         │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:04 UTC │
	│ addons  │ addons-081397 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:04 UTC │ 12 Dec 25 00:04 UTC │
	│ addons  │ addons-081397 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:04 UTC │ 12 Dec 25 00:04 UTC │
	│ ssh     │ addons-081397 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:04 UTC │                     │
	│ addons  │ addons-081397 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ addons  │ addons-081397 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ip      │ addons-081397 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ addons  │ addons-081397 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ addons  │ addons-081397 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ addons  │ addons-081397 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:07 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:55:51.508824  191080 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:51.508961  191080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:51.508968  191080 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:51.508973  191080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:51.509212  191080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1211 23:55:51.509810  191080 out.go:368] Setting JSON to false
	I1211 23:55:51.510832  191080 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":20296,"bootTime":1765477056,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:55:51.510906  191080 start.go:143] virtualization: kvm guest
	I1211 23:55:51.512916  191080 out.go:179] * [addons-081397] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1211 23:55:51.514286  191080 notify.go:221] Checking for updates...
	I1211 23:55:51.514305  191080 out.go:179]   - MINIKUBE_LOCATION=22101
	I1211 23:55:51.515624  191080 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:51.517281  191080 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1211 23:55:51.518706  191080 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:51.520288  191080 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:55:51.521862  191080 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:55:51.523574  191080 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 23:55:51.556952  191080 out.go:179] * Using the kvm2 driver based on user configuration
	I1211 23:55:51.558571  191080 start.go:309] selected driver: kvm2
	I1211 23:55:51.558600  191080 start.go:927] validating driver "kvm2" against <nil>
	I1211 23:55:51.558629  191080 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:55:51.559389  191080 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1211 23:55:51.559736  191080 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:55:51.559767  191080 cni.go:84] Creating CNI manager for ""
	I1211 23:55:51.559823  191080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:55:51.559835  191080 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 23:55:51.559888  191080 start.go:353] cluster config:
	{Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:55:51.560015  191080 iso.go:125] acquiring lock: {Name:mkc8bf4754eb4f0261bb252fe2c8bf1a2bf2967f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:55:51.561727  191080 out.go:179] * Starting "addons-081397" primary control-plane node in "addons-081397" cluster
	I1211 23:55:51.563063  191080 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:55:51.563108  191080 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1211 23:55:51.563116  191080 cache.go:65] Caching tarball of preloaded images
	I1211 23:55:51.563256  191080 preload.go:238] Found /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:55:51.563274  191080 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1211 23:55:51.563705  191080 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/config.json ...
	I1211 23:55:51.563732  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/config.json: {Name:mk3f56184a595aa65236de2721f264b9d77bbfd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:55:51.563928  191080 start.go:360] acquireMachinesLock for addons-081397: {Name:mk7557506c78bc6cb73692cb48d3039f590aa12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:55:51.564001  191080 start.go:364] duration metric: took 52.499µs to acquireMachinesLock for "addons-081397"
	I1211 23:55:51.564027  191080 start.go:93] Provisioning new machine with config: &{Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:55:51.564111  191080 start.go:125] createHost starting for "" (driver="kvm2")
	I1211 23:55:51.566772  191080 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1211 23:55:51.567024  191080 start.go:159] libmachine.API.Create for "addons-081397" (driver="kvm2")
	I1211 23:55:51.567078  191080 client.go:173] LocalClient.Create starting
	I1211 23:55:51.567214  191080 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem
	I1211 23:55:51.634646  191080 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem
	I1211 23:55:51.761850  191080 main.go:143] libmachine: creating domain...
	I1211 23:55:51.761879  191080 main.go:143] libmachine: creating network...
	I1211 23:55:51.763511  191080 main.go:143] libmachine: found existing default network
	I1211 23:55:51.763716  191080 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1211 23:55:51.764419  191080 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dae890}
	I1211 23:55:51.764553  191080 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-081397</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1211 23:55:51.771343  191080 main.go:143] libmachine: creating private network mk-addons-081397 192.168.39.0/24...
	I1211 23:55:51.876571  191080 main.go:143] libmachine: private network mk-addons-081397 192.168.39.0/24 created
	I1211 23:55:51.876999  191080 main.go:143] libmachine: <network>
	  <name>mk-addons-081397</name>
	  <uuid>f81ed5cb-0804-4477-9781-0372afa282e4</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:59:29:45'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1211 23:55:51.877044  191080 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397 ...
	I1211 23:55:51.877068  191080 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22101-186349/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
	I1211 23:55:51.877078  191080 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:51.877153  191080 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22101-186349/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22101-186349/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso...
	I1211 23:55:52.159080  191080 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa...
	I1211 23:55:52.239938  191080 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/addons-081397.rawdisk...
	I1211 23:55:52.239993  191080 main.go:143] libmachine: Writing magic tar header
	I1211 23:55:52.240026  191080 main.go:143] libmachine: Writing SSH key tar header
	I1211 23:55:52.240106  191080 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397 ...
	I1211 23:55:52.240169  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397
	I1211 23:55:52.240206  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397 (perms=drwx------)
	I1211 23:55:52.240215  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349/.minikube/machines
	I1211 23:55:52.240224  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:55:52.240232  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:52.240240  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349/.minikube (perms=drwxr-xr-x)
	I1211 23:55:52.240250  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349
	I1211 23:55:52.240258  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349 (perms=drwxrwxr-x)
	I1211 23:55:52.240268  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:55:52.240275  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:55:52.240283  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1211 23:55:52.240291  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1211 23:55:52.240299  191080 main.go:143] libmachine: checking permissions on dir: /home
	I1211 23:55:52.240306  191080 main.go:143] libmachine: skipping /home - not owner
	I1211 23:55:52.240309  191080 main.go:143] libmachine: defining domain...
	I1211 23:55:52.242720  191080 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-081397</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/addons-081397.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-081397'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1211 23:55:52.249320  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:07:bd:c2 in network default
	I1211 23:55:52.250641  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:52.250680  191080 main.go:143] libmachine: starting domain...
	I1211 23:55:52.250686  191080 main.go:143] libmachine: ensuring networks are active...
	I1211 23:55:52.252166  191080 main.go:143] libmachine: Ensuring network default is active
	I1211 23:55:52.253166  191080 main.go:143] libmachine: Ensuring network mk-addons-081397 is active
	I1211 23:55:52.254226  191080 main.go:143] libmachine: getting domain XML...
	I1211 23:55:52.255944  191080 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-081397</name>
	  <uuid>132f08c0-43de-4a3f-abcb-9cf58535d902</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/addons-081397.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:2b:32:89'/>
	      <source network='mk-addons-081397'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:07:bd:c2'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1211 23:55:53.688550  191080 main.go:143] libmachine: waiting for domain to start...
	I1211 23:55:53.691114  191080 main.go:143] libmachine: domain is now running
	I1211 23:55:53.691144  191080 main.go:143] libmachine: waiting for IP...
	I1211 23:55:53.692424  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:53.693801  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:53.693826  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:53.694334  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:53.694402  191080 retry.go:31] will retry after 260.574844ms: waiting for domain to come up
	I1211 23:55:53.957397  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:53.958627  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:53.958657  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:53.959170  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:53.959230  191080 retry.go:31] will retry after 343.725464ms: waiting for domain to come up
	I1211 23:55:54.305232  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:54.306166  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:54.306193  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:54.306730  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:54.306782  191080 retry.go:31] will retry after 478.083756ms: waiting for domain to come up
	I1211 23:55:54.787051  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:54.788263  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:54.788294  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:54.788968  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:54.789021  191080 retry.go:31] will retry after 586.83961ms: waiting for domain to come up
	I1211 23:55:55.378616  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:55.379761  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:55.379794  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:55.380438  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:55.380514  191080 retry.go:31] will retry after 629.739442ms: waiting for domain to come up
	I1211 23:55:56.011678  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:56.012771  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:56.012794  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:56.013869  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:56.013951  191080 retry.go:31] will retry after 838.290437ms: waiting for domain to come up
	I1211 23:55:56.853752  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:56.854450  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:56.854485  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:56.854918  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:56.854979  191080 retry.go:31] will retry after 1.020736825s: waiting for domain to come up
	I1211 23:55:57.877350  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:57.878104  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:57.878134  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:57.878522  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:57.878563  191080 retry.go:31] will retry after 1.394206578s: waiting for domain to come up
	I1211 23:55:59.275153  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:59.276377  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:59.276409  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:59.276994  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:59.277049  191080 retry.go:31] will retry after 1.4774988s: waiting for domain to come up
	I1211 23:56:00.757189  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:00.758049  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:00.758071  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:00.758450  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:00.758518  191080 retry.go:31] will retry after 1.704024367s: waiting for domain to come up
	I1211 23:56:02.464578  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:02.465672  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:02.465713  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:02.466390  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:02.466496  191080 retry.go:31] will retry after 2.558039009s: waiting for domain to come up
	I1211 23:56:05.028156  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:05.029424  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:05.029476  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:05.030141  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:05.030218  191080 retry.go:31] will retry after 2.713185396s: waiting for domain to come up
	I1211 23:56:07.745837  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:07.746810  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:07.746835  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:07.747308  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:07.747359  191080 retry.go:31] will retry after 3.017005916s: waiting for domain to come up
	I1211 23:56:10.768106  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:10.769156  191080 main.go:143] libmachine: domain addons-081397 has current primary IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:10.769185  191080 main.go:143] libmachine: found domain IP: 192.168.39.2
	I1211 23:56:10.769196  191080 main.go:143] libmachine: reserving static IP address...
	I1211 23:56:10.769843  191080 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-081397", mac: "52:54:00:2b:32:89", ip: "192.168.39.2"} in network mk-addons-081397
	I1211 23:56:11.003302  191080 main.go:143] libmachine: reserved static IP address 192.168.39.2 for domain addons-081397
	I1211 23:56:11.003331  191080 main.go:143] libmachine: waiting for SSH...
	I1211 23:56:11.003337  191080 main.go:143] libmachine: Getting to WaitForSSH function...
	I1211 23:56:11.008569  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.009090  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.009115  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.009350  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.009619  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.009631  191080 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1211 23:56:11.126360  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:56:11.126895  191080 main.go:143] libmachine: domain creation complete
	I1211 23:56:11.129784  191080 machine.go:94] provisionDockerMachine start ...
	I1211 23:56:11.134589  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.135537  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.135574  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.136010  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.136277  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.136290  191080 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 23:56:11.257254  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1211 23:56:11.257302  191080 buildroot.go:166] provisioning hostname "addons-081397"
	I1211 23:56:11.261573  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.262389  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.262457  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.262926  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.263212  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.263234  191080 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-081397 && echo "addons-081397" | sudo tee /etc/hostname
	I1211 23:56:11.410142  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-081397
	
	I1211 23:56:11.414271  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.414882  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.414917  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.415210  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.415441  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.415482  191080 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-081397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-081397/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-081397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:56:11.555358  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:56:11.555395  191080 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22101-186349/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-186349/.minikube}
	I1211 23:56:11.555420  191080 buildroot.go:174] setting up certificates
	I1211 23:56:11.555443  191080 provision.go:84] configureAuth start
	I1211 23:56:11.558885  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.559509  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.559565  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.562716  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.563314  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.563346  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.563750  191080 provision.go:143] copyHostCerts
	I1211 23:56:11.563901  191080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/cert.pem (1123 bytes)
	I1211 23:56:11.564087  191080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/key.pem (1675 bytes)
	I1211 23:56:11.564163  191080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/ca.pem (1082 bytes)
	I1211 23:56:11.564231  191080 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem org=jenkins.addons-081397 san=[127.0.0.1 192.168.39.2 addons-081397 localhost minikube]
	I1211 23:56:11.604096  191080 provision.go:177] copyRemoteCerts
	I1211 23:56:11.604171  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:56:11.607337  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.607977  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.608015  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.608218  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:11.699591  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 23:56:11.739646  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1211 23:56:11.780870  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 23:56:11.821711  191080 provision.go:87] duration metric: took 266.231617ms to configureAuth
	I1211 23:56:11.821755  191080 buildroot.go:189] setting minikube options for container-runtime
	I1211 23:56:11.822007  191080 config.go:182] Loaded profile config "addons-081397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:11.826045  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.826550  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.826578  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.826785  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.827068  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.827088  191080 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:56:12.345303  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 23:56:12.345334  191080 machine.go:97] duration metric: took 1.2155135s to provisionDockerMachine
	I1211 23:56:12.345348  191080 client.go:176] duration metric: took 20.778259004s to LocalClient.Create
	I1211 23:56:12.345369  191080 start.go:167] duration metric: took 20.77834555s to libmachine.API.Create "addons-081397"
	I1211 23:56:12.345379  191080 start.go:293] postStartSetup for "addons-081397" (driver="kvm2")
	I1211 23:56:12.345393  191080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:56:12.345498  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:56:12.350156  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.351165  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.351226  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.351544  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:12.444149  191080 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:56:12.450354  191080 info.go:137] Remote host: Buildroot 2025.02
	I1211 23:56:12.450386  191080 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-186349/.minikube/addons for local assets ...
	I1211 23:56:12.450452  191080 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-186349/.minikube/files for local assets ...
	I1211 23:56:12.450508  191080 start.go:296] duration metric: took 105.122285ms for postStartSetup
	I1211 23:56:12.489061  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.489811  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.489855  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.490235  191080 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/config.json ...
	I1211 23:56:12.490597  191080 start.go:128] duration metric: took 20.9264692s to createHost
	I1211 23:56:12.493999  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.494451  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.494490  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.494674  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:12.494897  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:12.494909  191080 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1211 23:56:12.615405  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765497372.576443288
	
	I1211 23:56:12.615439  191080 fix.go:216] guest clock: 1765497372.576443288
	I1211 23:56:12.615447  191080 fix.go:229] Guest: 2025-12-11 23:56:12.576443288 +0000 UTC Remote: 2025-12-11 23:56:12.490625673 +0000 UTC m=+21.040527790 (delta=85.817615ms)
	I1211 23:56:12.615500  191080 fix.go:200] guest clock delta is within tolerance: 85.817615ms
	I1211 23:56:12.615508  191080 start.go:83] releasing machines lock for "addons-081397", held for 21.051491664s
	I1211 23:56:12.619172  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.619799  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.619831  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.620772  191080 ssh_runner.go:195] Run: cat /version.json
	I1211 23:56:12.620876  191080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:56:12.625375  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.625530  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.626036  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.626063  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.626330  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.626345  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:12.626381  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.626618  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:12.717381  191080 ssh_runner.go:195] Run: systemctl --version
	I1211 23:56:12.749852  191080 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:56:13.078529  191080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 23:56:13.088885  191080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 23:56:13.089007  191080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:56:13.118717  191080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1211 23:56:13.118763  191080 start.go:496] detecting cgroup driver to use...
	I1211 23:56:13.118864  191080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:56:13.148400  191080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:56:13.169798  191080 docker.go:218] disabling cri-docker service (if available) ...
	I1211 23:56:13.169888  191080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:56:13.191896  191080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:56:13.211802  191080 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:56:13.376765  191080 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:56:13.606305  191080 docker.go:234] disabling docker service ...
	I1211 23:56:13.606403  191080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:56:13.625180  191080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:56:13.643232  191080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:56:13.829218  191080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:56:14.000354  191080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:56:14.021612  191080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:56:14.050867  191080 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 23:56:14.050963  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.068612  191080 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 23:56:14.068701  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.086254  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.104697  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.123074  191080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:56:14.143227  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.161079  191080 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.188908  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.207821  191080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:56:14.223124  191080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1211 23:56:14.223216  191080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1211 23:56:14.252980  191080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 23:56:14.270522  191080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:14.430888  191080 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 23:56:14.564516  191080 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:56:14.564671  191080 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:56:14.574658  191080 start.go:564] Will wait 60s for crictl version
	I1211 23:56:14.574811  191080 ssh_runner.go:195] Run: which crictl
	I1211 23:56:14.580945  191080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 23:56:14.633033  191080 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1211 23:56:14.633155  191080 ssh_runner.go:195] Run: crio --version
	I1211 23:56:14.669436  191080 ssh_runner.go:195] Run: crio --version
	I1211 23:56:14.710252  191080 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1211 23:56:14.715883  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:14.716478  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:14.716519  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:14.716765  191080 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1211 23:56:14.724237  191080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:56:14.744504  191080 kubeadm.go:884] updating cluster {Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 23:56:14.744646  191080 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:56:14.744696  191080 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:56:14.782232  191080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1211 23:56:14.782317  191080 ssh_runner.go:195] Run: which lz4
	I1211 23:56:14.788630  191080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 23:56:14.795116  191080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 23:56:14.795159  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1211 23:56:16.445424  191080 crio.go:462] duration metric: took 1.656827131s to copy over tarball
	I1211 23:56:16.445532  191080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 23:56:18.102205  191080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.656625041s)
	I1211 23:56:18.102245  191080 crio.go:469] duration metric: took 1.656768065s to extract the tarball
	I1211 23:56:18.102258  191080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1211 23:56:18.141443  191080 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:56:18.189200  191080 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:56:18.189229  191080 cache_images.go:86] Images are preloaded, skipping loading
	I1211 23:56:18.189239  191080 kubeadm.go:935] updating node { 192.168.39.2 8443 v1.34.2 crio true true} ...
	I1211 23:56:18.189344  191080 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-081397 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 23:56:18.189436  191080 ssh_runner.go:195] Run: crio config
	I1211 23:56:18.243325  191080 cni.go:84] Creating CNI manager for ""
	I1211 23:56:18.243368  191080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:56:18.243392  191080 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 23:56:18.243429  191080 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-081397 NodeName:addons-081397 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:56:18.243664  191080 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-081397"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 23:56:18.243802  191080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1211 23:56:18.259378  191080 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 23:56:18.259504  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 23:56:18.274263  191080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1211 23:56:18.301193  191080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:56:18.326928  191080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1211 23:56:18.352300  191080 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I1211 23:56:18.358187  191080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:56:18.378953  191080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:18.546541  191080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:56:18.581301  191080 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397 for IP: 192.168.39.2
	I1211 23:56:18.581326  191080 certs.go:195] generating shared ca certs ...
	I1211 23:56:18.581346  191080 certs.go:227] acquiring lock for ca certs: {Name:mkdc58adfd2cc299a76aeec81ac0d7f7d2a38e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.581537  191080 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key
	I1211 23:56:18.667363  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt ...
	I1211 23:56:18.667401  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt: {Name:mk1b55f33c9202ab57b68cfcba7feed18a5c869b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.667594  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key ...
	I1211 23:56:18.667607  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key: {Name:mk31aac21dc0da02b77cc3d7268007e3ddde417b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.667688  191080 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key
	I1211 23:56:18.787173  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt ...
	I1211 23:56:18.787207  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt: {Name:mk50e6f78e87c39b691065db3fbc22d4178cbab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.787389  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key ...
	I1211 23:56:18.787400  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key: {Name:mk3201307c9797e697c52cf7944b78460ad79885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.787484  191080 certs.go:257] generating profile certs ...
	I1211 23:56:18.787545  191080 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.key
	I1211 23:56:18.787567  191080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt with IP's: []
	I1211 23:56:18.836629  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt ...
	I1211 23:56:18.836666  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: {Name:mk4cd9c65ec1631677a6989710916cca92666039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.836848  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.key ...
	I1211 23:56:18.836869  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.key: {Name:mk158319f878ba2a2974fa05c9c5e81406b1ff04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.837128  191080 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68
	I1211 23:56:18.837174  191080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2]
	I1211 23:56:18.895323  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68 ...
	I1211 23:56:18.895360  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68: {Name:mka19cf3aa517a67c9823b9db6a0564ae2c88f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.895568  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68 ...
	I1211 23:56:18.895582  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68: {Name:mkcb32c8b3892cdbb32375c99cf73efb7e2d2ebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.895669  191080 certs.go:382] copying /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68 -> /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt
	I1211 23:56:18.895740  191080 certs.go:386] copying /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68 -> /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key
	I1211 23:56:18.895792  191080 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key
	I1211 23:56:18.895810  191080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt with IP's: []
	I1211 23:56:19.059957  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt ...
	I1211 23:56:19.059996  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt: {Name:mkeece2e2a9106cbaddd7935ae5c93b8b6536c2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:19.060202  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key ...
	I1211 23:56:19.060217  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key: {Name:mk7fa3201305a84265a30d592c7bfaa4ea9d3d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:19.060422  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 23:56:19.060478  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem (1082 bytes)
	I1211 23:56:19.060506  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:56:19.060532  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem (1675 bytes)
	I1211 23:56:19.061341  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:56:19.104179  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 23:56:19.148345  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:56:19.191324  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1211 23:56:19.230603  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1211 23:56:19.274335  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 23:56:19.314103  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:56:19.355420  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1211 23:56:19.392791  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:56:19.429841  191080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:56:19.455328  191080 ssh_runner.go:195] Run: openssl version
	I1211 23:56:19.463919  191080 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.478287  191080 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 23:56:19.494141  191080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.501262  191080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.501357  191080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.511987  191080 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 23:56:19.527366  191080 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1211 23:56:19.544629  191080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:56:19.551139  191080 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:56:19.551211  191080 kubeadm.go:401] StartCluster: {Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 C
lusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:56:19.551367  191080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:56:19.551501  191080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:56:19.601329  191080 cri.go:89] found id: ""
	I1211 23:56:19.601414  191080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:56:19.615890  191080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:56:19.632616  191080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:56:19.646731  191080 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:56:19.646765  191080 kubeadm.go:158] found existing configuration files:
	
	I1211 23:56:19.646828  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:56:19.660106  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:56:19.660190  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:56:19.676276  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:56:19.690027  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:56:19.690116  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:56:19.705756  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:56:19.720625  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:56:19.720715  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:56:19.735359  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:56:19.750390  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:56:19.750481  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 23:56:19.766951  191080 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 23:56:19.839756  191080 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1211 23:56:19.839847  191080 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 23:56:19.990602  191080 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:56:19.990863  191080 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:56:19.991043  191080 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:56:20.010193  191080 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:56:20.165972  191080 out.go:252]   - Generating certificates and keys ...
	I1211 23:56:20.166144  191080 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 23:56:20.166252  191080 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 23:56:20.166347  191080 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:56:20.551090  191080 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:56:20.773761  191080 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:56:21.138092  191080 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1211 23:56:21.423874  191080 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1211 23:56:21.424042  191080 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-081397 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I1211 23:56:21.781372  191080 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1211 23:56:21.781631  191080 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-081397 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I1211 23:56:22.783972  191080 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:56:22.973180  191080 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:56:23.396371  191080 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1211 23:56:23.396644  191080 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:56:23.822810  191080 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:56:24.134647  191080 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:56:24.293087  191080 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:56:24.542047  191080 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:56:24.865144  191080 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:56:24.865682  191080 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:56:24.869746  191080 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:56:24.871219  191080 out.go:252]   - Booting up control plane ...
	I1211 23:56:24.871351  191080 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:56:24.871523  191080 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:56:24.871597  191080 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:56:24.889102  191080 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:56:24.889275  191080 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 23:56:24.898513  191080 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 23:56:24.899113  191080 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:56:24.899188  191080 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 23:56:25.090240  191080 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:56:25.090397  191080 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:56:26.591737  191080 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502403531s
	I1211 23:56:26.595003  191080 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1211 23:56:26.595170  191080 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.2:8443/livez
	I1211 23:56:26.595328  191080 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1211 23:56:26.595488  191080 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1211 23:56:29.712995  191080 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.118803589s
	I1211 23:56:31.068676  191080 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.475444759s
	I1211 23:56:33.595001  191080 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002476016s
	I1211 23:56:33.626020  191080 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:56:33.642768  191080 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:56:33.672411  191080 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:56:33.672732  191080 kubeadm.go:319] [mark-control-plane] Marking the node addons-081397 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:56:33.697567  191080 kubeadm.go:319] [bootstrap-token] Using token: fx6xk6.14clsj7mtuippxxx
	I1211 23:56:33.699696  191080 out.go:252]   - Configuring RBAC rules ...
	I1211 23:56:33.699861  191080 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:56:33.705146  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:56:33.724431  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:56:33.735134  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:56:33.742267  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:56:33.751087  191080 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:56:34.005984  191080 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:56:34.545250  191080 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1211 23:56:35.004202  191080 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1211 23:56:35.005119  191080 kubeadm.go:319] 
	I1211 23:56:35.005179  191080 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1211 23:56:35.005184  191080 kubeadm.go:319] 
	I1211 23:56:35.005261  191080 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1211 23:56:35.005268  191080 kubeadm.go:319] 
	I1211 23:56:35.005289  191080 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1211 23:56:35.005347  191080 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:56:35.005431  191080 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:56:35.005483  191080 kubeadm.go:319] 
	I1211 23:56:35.005568  191080 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1211 23:56:35.005579  191080 kubeadm.go:319] 
	I1211 23:56:35.005647  191080 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:56:35.005662  191080 kubeadm.go:319] 
	I1211 23:56:35.005707  191080 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1211 23:56:35.005772  191080 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:56:35.005838  191080 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:56:35.005844  191080 kubeadm.go:319] 
	I1211 23:56:35.005915  191080 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:56:35.005983  191080 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1211 23:56:35.005989  191080 kubeadm.go:319] 
	I1211 23:56:35.006133  191080 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fx6xk6.14clsj7mtuippxxx \
	I1211 23:56:35.006283  191080 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c0b88820597315620ec0510f9ac83d55213c46f15e2d7641e43c80784b0671ae \
	I1211 23:56:35.006317  191080 kubeadm.go:319] 	--control-plane 
	I1211 23:56:35.006322  191080 kubeadm.go:319] 
	I1211 23:56:35.006403  191080 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:56:35.006410  191080 kubeadm.go:319] 
	I1211 23:56:35.006504  191080 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fx6xk6.14clsj7mtuippxxx \
	I1211 23:56:35.006639  191080 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c0b88820597315620ec0510f9ac83d55213c46f15e2d7641e43c80784b0671ae 
	I1211 23:56:35.009065  191080 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 23:56:35.009128  191080 cni.go:84] Creating CNI manager for ""
	I1211 23:56:35.009169  191080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:56:35.012077  191080 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1211 23:56:35.013875  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1211 23:56:35.030825  191080 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1211 23:56:35.061826  191080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:56:35.061965  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:35.061967  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-081397 minikube.k8s.io/updated_at=2025_12_11T23_56_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=addons-081397 minikube.k8s.io/primary=true
	I1211 23:56:35.142016  191080 ops.go:34] apiserver oom_adj: -16
	I1211 23:56:35.257509  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:35.758327  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:36.257620  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:36.757733  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:37.258377  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:37.758134  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:38.258440  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:38.758050  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:39.258437  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:39.757704  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:40.258657  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:40.495051  191080 kubeadm.go:1114] duration metric: took 5.433189491s to wait for elevateKubeSystemPrivileges
	I1211 23:56:40.495110  191080 kubeadm.go:403] duration metric: took 20.943905559s to StartCluster
	I1211 23:56:40.495141  191080 settings.go:142] acquiring lock: {Name:mkc54bc00cde7f692cc672e67ab0af4ae6a15c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:40.495326  191080 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1211 23:56:40.495951  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/kubeconfig: {Name:mkdf9d6588b522077beb3bc03f9eff4a2b248de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:40.496234  191080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:56:40.496280  191080 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:56:40.496340  191080 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1211 23:56:40.496488  191080 addons.go:70] Setting yakd=true in profile "addons-081397"
	I1211 23:56:40.496513  191080 addons.go:239] Setting addon yakd=true in "addons-081397"
	I1211 23:56:40.496519  191080 addons.go:70] Setting inspektor-gadget=true in profile "addons-081397"
	I1211 23:56:40.496555  191080 addons.go:239] Setting addon inspektor-gadget=true in "addons-081397"
	I1211 23:56:40.496571  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496571  191080 addons.go:70] Setting ingress=true in profile "addons-081397"
	I1211 23:56:40.496589  191080 config.go:182] Loaded profile config "addons-081397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:40.496605  191080 addons.go:239] Setting addon ingress=true in "addons-081397"
	I1211 23:56:40.496607  191080 addons.go:70] Setting metrics-server=true in profile "addons-081397"
	I1211 23:56:40.496619  191080 addons.go:70] Setting ingress-dns=true in profile "addons-081397"
	I1211 23:56:40.496623  191080 addons.go:239] Setting addon metrics-server=true in "addons-081397"
	I1211 23:56:40.496630  191080 addons.go:70] Setting cloud-spanner=true in profile "addons-081397"
	I1211 23:56:40.496582  191080 addons.go:70] Setting registry-creds=true in profile "addons-081397"
	I1211 23:56:40.496643  191080 addons.go:70] Setting gcp-auth=true in profile "addons-081397"
	I1211 23:56:40.496649  191080 addons.go:239] Setting addon cloud-spanner=true in "addons-081397"
	I1211 23:56:40.496652  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496658  191080 addons.go:239] Setting addon registry-creds=true in "addons-081397"
	I1211 23:56:40.496662  191080 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-081397"
	I1211 23:56:40.496670  191080 mustload.go:66] Loading cluster: addons-081397
	I1211 23:56:40.496674  191080 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-081397"
	I1211 23:56:40.496687  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496694  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496707  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496846  191080 config.go:182] Loaded profile config "addons-081397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:40.497455  191080 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-081397"
	I1211 23:56:40.497568  191080 addons.go:70] Setting registry=true in profile "addons-081397"
	I1211 23:56:40.497576  191080 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-081397"
	I1211 23:56:40.497609  191080 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-081397"
	I1211 23:56:40.497628  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497632  191080 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-081397"
	I1211 23:56:40.497653  191080 addons.go:70] Setting volcano=true in profile "addons-081397"
	I1211 23:56:40.497674  191080 addons.go:239] Setting addon volcano=true in "addons-081397"
	I1211 23:56:40.497708  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497837  191080 addons.go:70] Setting volumesnapshots=true in profile "addons-081397"
	I1211 23:56:40.497852  191080 addons.go:239] Setting addon volumesnapshots=true in "addons-081397"
	I1211 23:56:40.496582  191080 addons.go:70] Setting default-storageclass=true in profile "addons-081397"
	I1211 23:56:40.497876  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497894  191080 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-081397"
	I1211 23:56:40.496631  191080 addons.go:239] Setting addon ingress-dns=true in "addons-081397"
	I1211 23:56:40.498289  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496621  191080 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-081397"
	I1211 23:56:40.498652  191080 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-081397"
	I1211 23:56:40.498685  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.499011  191080 addons.go:70] Setting storage-provisioner=true in profile "addons-081397"
	I1211 23:56:40.499034  191080 addons.go:239] Setting addon storage-provisioner=true in "addons-081397"
	I1211 23:56:40.499062  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496606  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497596  191080 addons.go:239] Setting addon registry=true in "addons-081397"
	I1211 23:56:40.496653  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.499671  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.500663  191080 out.go:179] * Verifying Kubernetes components...
	I1211 23:56:40.502382  191080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:40.503922  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.506960  191080 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1211 23:56:40.507005  191080 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1211 23:56:40.507060  191080 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1211 23:56:40.506993  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:40.507197  191080 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-081397"
	I1211 23:56:40.507613  191080 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	W1211 23:56:40.508273  191080 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1211 23:56:40.508767  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.508846  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1211 23:56:40.508884  191080 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1211 23:56:40.508983  191080 addons.go:239] Setting addon default-storageclass=true in "addons-081397"
	I1211 23:56:40.509037  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.509123  191080 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1211 23:56:40.509134  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1211 23:56:40.509862  191080 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1211 23:56:40.509879  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1211 23:56:40.510705  191080 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1211 23:56:40.510765  191080 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:56:40.510708  191080 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1211 23:56:40.510709  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1211 23:56:40.510780  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1211 23:56:40.510963  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1211 23:56:40.512352  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1211 23:56:40.512423  191080 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:56:40.512795  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1211 23:56:40.513366  191080 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1211 23:56:40.513405  191080 out.go:179]   - Using image docker.io/registry:3.0.0
	I1211 23:56:40.513427  191080 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:56:40.513856  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1211 23:56:40.513452  191080 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:56:40.513569  191080 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1211 23:56:40.514419  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1211 23:56:40.514823  191080 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1211 23:56:40.515501  191080 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:56:40.515566  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1211 23:56:40.516012  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:40.516028  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1211 23:56:40.516032  191080 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:56:40.516099  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:56:40.516097  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1211 23:56:40.516114  191080 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1211 23:56:40.517202  191080 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:56:40.517226  191080 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:56:40.517560  191080 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1211 23:56:40.517676  191080 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1211 23:56:40.517948  191080 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:56:40.517967  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1211 23:56:40.519009  191080 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1211 23:56:40.519029  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1211 23:56:40.519106  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1211 23:56:40.520326  191080 out.go:179]   - Using image docker.io/busybox:stable
	I1211 23:56:40.521667  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1211 23:56:40.521748  191080 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:56:40.521773  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1211 23:56:40.523191  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.524446  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1211 23:56:40.524538  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.525508  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.525522  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.525556  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.526184  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.526857  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.526995  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.526987  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.527300  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1211 23:56:40.526876  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.528176  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.528215  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.528450  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.528655  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.528687  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.528793  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.529400  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.530020  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1211 23:56:40.530078  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.530252  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.530288  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.531125  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.531509  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.531550  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.531581  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.531691  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.532336  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.532490  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.532676  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.532971  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.533016  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.533392  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1211 23:56:40.533786  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.533419  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.533922  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.534209  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.534245  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.534763  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.534785  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.534834  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.534900  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.535083  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.535167  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.535342  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.535606  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1211 23:56:40.535631  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1211 23:56:40.535965  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.536268  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.536305  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536400  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.536418  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536548  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.536583  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.536615  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536653  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536963  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.536994  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.537838  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.537879  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.538098  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.540825  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.541431  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.541502  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.541709  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	W1211 23:56:41.043758  191080 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53944->192.168.39.2:22: read: connection reset by peer
	I1211 23:56:41.043809  191080 retry.go:31] will retry after 311.842554ms: ssh: handshake failed: read tcp 192.168.39.1:53944->192.168.39.2:22: read: connection reset by peer
	W1211 23:56:41.043894  191080 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53950->192.168.39.2:22: read: connection reset by peer
	I1211 23:56:41.043909  191080 retry.go:31] will retry after 329.825082ms: ssh: handshake failed: read tcp 192.168.39.1:53950->192.168.39.2:22: read: connection reset by peer
	I1211 23:56:41.808354  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1211 23:56:41.808403  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1211 23:56:41.861654  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1211 23:56:41.861692  191080 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1211 23:56:41.896943  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:56:41.918961  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:56:41.924444  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:56:41.946144  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:56:42.009856  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1211 23:56:42.009896  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1211 23:56:42.018699  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1211 23:56:42.069883  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:56:42.072418  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1211 23:56:42.145123  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:56:42.186767  191080 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1211 23:56:42.186812  191080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1211 23:56:42.259103  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:56:42.428120  191080 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.93183404s)
	I1211 23:56:42.428248  191080 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.925817571s)
	I1211 23:56:42.428352  191080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:56:42.428498  191080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1211 23:56:42.452426  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1211 23:56:42.452489  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1211 23:56:42.484208  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1211 23:56:42.484275  191080 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1211 23:56:42.588545  191080 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1211 23:56:42.588585  191080 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1211 23:56:42.633670  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1211 23:56:42.633723  191080 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1211 23:56:42.637947  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:56:42.706175  191080 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1211 23:56:42.706217  191080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1211 23:56:42.968807  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1211 23:56:42.968847  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1211 23:56:43.007497  191080 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:56:43.007532  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1211 23:56:43.028368  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1211 23:56:43.028403  191080 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1211 23:56:43.092788  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:56:43.092826  191080 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1211 23:56:43.128649  191080 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1211 23:56:43.128687  191080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1211 23:56:43.289535  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1211 23:56:43.289580  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1211 23:56:43.346982  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:56:43.347023  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1211 23:56:43.401818  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:56:43.523249  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:56:43.586597  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1211 23:56:43.586642  191080 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1211 23:56:43.774067  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1211 23:56:43.774118  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1211 23:56:43.801000  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:56:44.025438  191080 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:44.025490  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1211 23:56:44.174620  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.277572584s)
	I1211 23:56:44.174769  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.250262195s)
	I1211 23:56:44.193708  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1211 23:56:44.193737  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1211 23:56:44.555609  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:44.920026  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1211 23:56:44.920060  191080 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1211 23:56:45.697268  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1211 23:56:45.697305  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1211 23:56:46.254763  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1211 23:56:46.254799  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1211 23:56:46.581598  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:56:46.581642  191080 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1211 23:56:46.687719  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:56:47.971016  191080 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1211 23:56:47.975173  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:47.976154  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:47.976199  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:47.976614  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:48.491380  191080 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1211 23:56:48.692419  191080 addons.go:239] Setting addon gcp-auth=true in "addons-081397"
	I1211 23:56:48.692544  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:48.695342  191080 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1211 23:56:48.698779  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:48.699427  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:48.699601  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:48.699980  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:48.892556  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.973548228s)
	I1211 23:56:49.408333  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.462135831s)
	I1211 23:56:49.408425  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.389664864s)
	I1211 23:56:51.938139  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.865666864s)
	I1211 23:56:51.938187  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.793007267s)
	I1211 23:56:51.938385  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.679223761s)
	I1211 23:56:51.938486  191080 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.509912418s)
	I1211 23:56:51.938505  191080 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.510132207s)
	I1211 23:56:51.938523  191080 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1211 23:56:51.938693  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.300704152s)
	I1211 23:56:51.938740  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.868817664s)
	I1211 23:56:51.938763  191080 addons.go:495] Verifying addon ingress=true in "addons-081397"
	I1211 23:56:51.938775  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.536910017s)
	I1211 23:56:51.938799  191080 addons.go:495] Verifying addon registry=true in "addons-081397"
	I1211 23:56:51.939144  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.415830154s)
	I1211 23:56:51.939191  191080 addons.go:495] Verifying addon metrics-server=true in "addons-081397"
	I1211 23:56:51.939242  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.138197843s)
	I1211 23:56:51.939362  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.383652629s)
	W1211 23:56:51.939405  191080 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:56:51.939434  191080 retry.go:31] will retry after 326.794424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:56:51.939960  191080 node_ready.go:35] waiting up to 6m0s for node "addons-081397" to be "Ready" ...
	I1211 23:56:51.941538  191080 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-081397 service yakd-dashboard -n yakd-dashboard
	
	I1211 23:56:51.941540  191080 out.go:179] * Verifying registry addon...
	I1211 23:56:51.941553  191080 out.go:179] * Verifying ingress addon...
	I1211 23:56:51.943990  191080 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1211 23:56:51.944213  191080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1211 23:56:51.964791  191080 node_ready.go:49] node "addons-081397" is "Ready"
	I1211 23:56:51.964839  191080 node_ready.go:38] duration metric: took 24.813054ms for node "addons-081397" to be "Ready" ...
	I1211 23:56:51.964861  191080 api_server.go:52] waiting for apiserver process to appear ...
	I1211 23:56:51.964931  191080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 23:56:52.001706  191080 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:56:52.001747  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:52.002821  191080 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1211 23:56:52.002849  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:52.266441  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:52.467902  191080 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-081397" context rescaled to 1 replicas
	I1211 23:56:52.469927  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:52.473967  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:52.974199  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:53.067246  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.503323  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.503384  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:53.644012  191080 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.948623338s)
	I1211 23:56:53.644102  191080 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.679150419s)
	I1211 23:56:53.644155  191080 api_server.go:72] duration metric: took 13.147840239s to wait for apiserver process to appear ...
	I1211 23:56:53.644280  191080 api_server.go:88] waiting for apiserver healthz status ...
	I1211 23:56:53.644328  191080 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I1211 23:56:53.644007  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.956173954s)
	I1211 23:56:53.644412  191080 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-081397"
	I1211 23:56:53.646266  191080 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1211 23:56:53.647231  191080 out.go:179] * Verifying csi-hostpath-driver addon...
	I1211 23:56:53.648911  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:53.650424  191080 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1211 23:56:53.650455  191080 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1211 23:56:53.650539  191080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1211 23:56:53.695860  191080 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I1211 23:56:53.698147  191080 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:56:53.698187  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:53.714330  191080 api_server.go:141] control plane version: v1.34.2
	I1211 23:56:53.714403  191080 api_server.go:131] duration metric: took 70.105256ms to wait for apiserver health ...
	I1211 23:56:53.714423  191080 system_pods.go:43] waiting for kube-system pods to appear ...
	I1211 23:56:53.722159  191080 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1211 23:56:53.722205  191080 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1211 23:56:53.741176  191080 system_pods.go:59] 20 kube-system pods found
	I1211 23:56:53.741243  191080 system_pods.go:61] "amd-gpu-device-plugin-djxv6" [4f5aeb19-64d9-4433-b64e-e6cfb3654839] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:56:53.741269  191080 system_pods.go:61] "coredns-66bc5c9577-dmswf" [30230e03-4081-4208-bdd5-a93b39aaaa41] Running
	I1211 23:56:53.741279  191080 system_pods.go:61] "coredns-66bc5c9577-prc7f" [f5b3faeb-71ca-42c9-b591-4b563dca360b] Running
	I1211 23:56:53.741289  191080 system_pods.go:61] "csi-hostpath-attacher-0" [fd013040-9f15-4172-87f5-15b174a58d87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:56:53.741297  191080 system_pods.go:61] "csi-hostpath-resizer-0" [75ee82ce-3700-4961-8ce6-bd9b588cc478] Pending
	I1211 23:56:53.741307  191080 system_pods.go:61] "csi-hostpathplugin-69v6v" [d2bf83fd-6890-4456-896a-d83906c2ad1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:56:53.741316  191080 system_pods.go:61] "etcd-addons-081397" [76acbe8b-6c34-47ed-9c17-d10d2b90f854] Running
	I1211 23:56:53.741323  191080 system_pods.go:61] "kube-apiserver-addons-081397" [aa5c2483-4778-415c-983d-77b4683c028a] Running
	I1211 23:56:53.741330  191080 system_pods.go:61] "kube-controller-manager-addons-081397" [f66f3f89-2978-45f0-85e3-9b2485e2c357] Running
	I1211 23:56:53.741340  191080 system_pods.go:61] "kube-ingress-dns-minikube" [7b7df0e3-b14f-46c9-8338-f54a7557bdd0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:56:53.741347  191080 system_pods.go:61] "kube-proxy-jwqpk" [dd248790-eb90-4f63-bb25-4253ea30ba17] Running
	I1211 23:56:53.741358  191080 system_pods.go:61] "kube-scheduler-addons-081397" [d576bcfe-e1bc-4f95-be05-44d726aad7bf] Running
	I1211 23:56:53.741367  191080 system_pods.go:61] "metrics-server-85b7d694d7-zfsb8" [fd42d792-5bd0-449d-92f8-f0c0c74c4975] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:56:53.741382  191080 system_pods.go:61] "nvidia-device-plugin-daemonset-rbpjs" [22649f4f-f712-4939-86ae-d4e2f87acc0a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:56:53.741390  191080 system_pods.go:61] "registry-6b586f9694-f9q5b" [96c372a4-ae7e-4df5-9a48-525fc42f8bc5] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:56:53.741401  191080 system_pods.go:61] "registry-creds-764b6fb674-fn77c" [4d72d75e-437b-4632-9fb1-3a7067c23d39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:56:53.741414  191080 system_pods.go:61] "registry-proxy-fdnc8" [3d8a40d6-255a-4a70-aee7-d5a6ce60f129] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:56:53.741427  191080 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6pxqk" [9c319b4a-5f0f-4d81-9f15-6e457050470a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.741445  191080 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7pg65" [460595b1-c11f-4b8a-9d7c-5805587a937c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.741455  191080 system_pods.go:61] "storage-provisioner" [0c582cdc-c50b-4759-b05c-e3b1cd92e04f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:56:53.741497  191080 system_pods.go:74] duration metric: took 27.063753ms to wait for pod list to return data ...
	I1211 23:56:53.741514  191080 default_sa.go:34] waiting for default service account to be created ...
	I1211 23:56:53.789135  191080 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:56:53.789157  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1211 23:56:53.793775  191080 default_sa.go:45] found service account: "default"
	I1211 23:56:53.793806  191080 default_sa.go:55] duration metric: took 52.279991ms for default service account to be created ...
	I1211 23:56:53.793821  191080 system_pods.go:116] waiting for k8s-apps to be running ...
	I1211 23:56:53.844257  191080 system_pods.go:86] 20 kube-system pods found
	I1211 23:56:53.844307  191080 system_pods.go:89] "amd-gpu-device-plugin-djxv6" [4f5aeb19-64d9-4433-b64e-e6cfb3654839] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:56:53.844317  191080 system_pods.go:89] "coredns-66bc5c9577-dmswf" [30230e03-4081-4208-bdd5-a93b39aaaa41] Running
	I1211 23:56:53.844326  191080 system_pods.go:89] "coredns-66bc5c9577-prc7f" [f5b3faeb-71ca-42c9-b591-4b563dca360b] Running
	I1211 23:56:53.844334  191080 system_pods.go:89] "csi-hostpath-attacher-0" [fd013040-9f15-4172-87f5-15b174a58d87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:56:53.844340  191080 system_pods.go:89] "csi-hostpath-resizer-0" [75ee82ce-3700-4961-8ce6-bd9b588cc478] Pending
	I1211 23:56:53.844352  191080 system_pods.go:89] "csi-hostpathplugin-69v6v" [d2bf83fd-6890-4456-896a-d83906c2ad1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:56:53.844358  191080 system_pods.go:89] "etcd-addons-081397" [76acbe8b-6c34-47ed-9c17-d10d2b90f854] Running
	I1211 23:56:53.844364  191080 system_pods.go:89] "kube-apiserver-addons-081397" [aa5c2483-4778-415c-983d-77b4683c028a] Running
	I1211 23:56:53.844369  191080 system_pods.go:89] "kube-controller-manager-addons-081397" [f66f3f89-2978-45f0-85e3-9b2485e2c357] Running
	I1211 23:56:53.844377  191080 system_pods.go:89] "kube-ingress-dns-minikube" [7b7df0e3-b14f-46c9-8338-f54a7557bdd0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:56:53.844387  191080 system_pods.go:89] "kube-proxy-jwqpk" [dd248790-eb90-4f63-bb25-4253ea30ba17] Running
	I1211 23:56:53.844394  191080 system_pods.go:89] "kube-scheduler-addons-081397" [d576bcfe-e1bc-4f95-be05-44d726aad7bf] Running
	I1211 23:56:53.844407  191080 system_pods.go:89] "metrics-server-85b7d694d7-zfsb8" [fd42d792-5bd0-449d-92f8-f0c0c74c4975] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:56:53.844416  191080 system_pods.go:89] "nvidia-device-plugin-daemonset-rbpjs" [22649f4f-f712-4939-86ae-d4e2f87acc0a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:56:53.844429  191080 system_pods.go:89] "registry-6b586f9694-f9q5b" [96c372a4-ae7e-4df5-9a48-525fc42f8bc5] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:56:53.844439  191080 system_pods.go:89] "registry-creds-764b6fb674-fn77c" [4d72d75e-437b-4632-9fb1-3a7067c23d39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:56:53.844475  191080 system_pods.go:89] "registry-proxy-fdnc8" [3d8a40d6-255a-4a70-aee7-d5a6ce60f129] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:56:53.844488  191080 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6pxqk" [9c319b4a-5f0f-4d81-9f15-6e457050470a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.844498  191080 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7pg65" [460595b1-c11f-4b8a-9d7c-5805587a937c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.844507  191080 system_pods.go:89] "storage-provisioner" [0c582cdc-c50b-4759-b05c-e3b1cd92e04f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:56:53.844519  191080 system_pods.go:126] duration metric: took 50.689154ms to wait for k8s-apps to be running ...
	I1211 23:56:53.844532  191080 system_svc.go:44] waiting for kubelet service to be running ....
	I1211 23:56:53.844608  191080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 23:56:53.902002  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:56:53.955676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.955845  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.160809  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:54.448357  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:54.453907  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.660400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:54.960099  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:54.962037  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.993140  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.726594297s)
	I1211 23:56:54.993153  191080 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.148518984s)
	I1211 23:56:54.993221  191080 system_svc.go:56] duration metric: took 1.148683395s WaitForService to wait for kubelet
	I1211 23:56:54.993231  191080 kubeadm.go:587] duration metric: took 14.496919105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:56:54.993249  191080 node_conditions.go:102] verifying NodePressure condition ...
	I1211 23:56:55.001998  191080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1211 23:56:55.002046  191080 node_conditions.go:123] node cpu capacity is 2
	I1211 23:56:55.002095  191080 node_conditions.go:105] duration metric: took 8.839368ms to run NodePressure ...
	I1211 23:56:55.002114  191080 start.go:242] waiting for startup goroutines ...
	I1211 23:56:55.161169  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:55.517092  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:55.539796  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:55.579689  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.677622577s)
	I1211 23:56:55.581053  191080 addons.go:495] Verifying addon gcp-auth=true in "addons-081397"
	I1211 23:56:55.583166  191080 out.go:179] * Verifying gcp-auth addon...
	I1211 23:56:55.585775  191080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1211 23:56:55.610126  191080 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1211 23:56:55.610157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:55.684117  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:55.957671  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:55.958053  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.094446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:56.159426  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:56.454250  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:56.454305  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.593123  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:56.698651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:56.955164  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.955254  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:57.097317  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:57.160266  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:57.454368  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:57.455193  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:57.593869  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:57.657455  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:57.952124  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:57.953630  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:58.091657  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:58.192765  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:58.448854  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:58.454640  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:58.590861  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:58.656664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:58.951563  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:58.951970  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:59.092726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:59.156085  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:59.453106  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:59.455110  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:59.594050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:59.659663  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:59.950597  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:59.953854  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:00.098806  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:00.158739  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:00.451405  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:00.451426  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:00.592070  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:00.656305  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:00.954392  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:00.957143  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:01.089837  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:01.157925  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:01.451549  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:01.451947  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:01.592758  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:01.655586  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:01.950439  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:01.950524  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:02.091801  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:02.155816  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:02.449634  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:02.450369  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:02.591242  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:02.655088  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:02.952327  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:02.952622  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:03.090558  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:03.166505  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:03.449517  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:03.450499  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:03.590638  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:03.656141  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:03.950487  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:03.950653  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:04.092052  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:04.164233  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:04.452727  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:04.453010  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:04.590564  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:04.658766  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:04.956776  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:04.960214  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:05.089595  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:05.158346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:05.454648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:05.455366  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:05.589445  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:05.725092  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:05.950042  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:05.953003  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:06.093507  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:06.156581  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:06.448896  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:06.452118  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:06.589736  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:06.660370  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:06.952602  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:06.952699  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:07.093794  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:07.159924  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:07.451182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:07.452486  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:07.593007  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:07.655785  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:07.955585  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:07.955714  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:08.092772  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:08.159691  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:08.452421  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:08.453004  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:08.596649  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:08.657754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:09.151194  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:09.163928  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:09.166605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:09.166806  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:09.452575  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:09.452859  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:09.591132  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:09.658223  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:09.953976  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:09.958754  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:10.097815  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:10.160643  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:10.449852  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:10.449848  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:10.593346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:10.655349  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:10.951129  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:10.958386  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:11.091038  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:11.163797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:11.451681  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:11.455196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:11.594544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:11.665061  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:11.951173  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:11.952848  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:12.093150  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:12.157974  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:12.449252  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:12.452312  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:12.591441  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:12.661703  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:12.958989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:12.960103  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:13.089485  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:13.156074  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:13.452932  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:13.453001  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:13.592446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:13.658121  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:13.962529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:13.963557  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:14.091969  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:14.158221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:14.449389  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:14.450691  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:14.594295  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:14.659320  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:14.949072  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:14.952087  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:15.089407  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:15.155332  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:15.813442  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:15.813494  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:15.813503  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:15.813799  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:15.954853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:15.957241  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:16.091368  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:16.157225  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:16.462043  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:16.465005  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:16.590303  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:16.693434  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:16.948523  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:16.948597  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:17.090370  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:17.155629  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:17.450403  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:17.450602  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:17.592008  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:17.656775  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:17.952011  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:17.953801  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:18.090174  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:18.155951  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:18.447617  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:18.448323  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:18.590230  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:18.656537  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:18.948537  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:18.948865  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:19.090670  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:19.156440  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:19.448193  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:19.449148  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:19.589950  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:19.655094  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:19.949387  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:19.950227  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:20.092096  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:20.155631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:20.448262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:20.449009  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:20.589664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:20.655779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:20.952599  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:20.952790  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:21.090743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:21.154683  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:21.451260  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:21.452256  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:21.593154  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:21.656811  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:22.109419  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:22.111778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:22.111954  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:22.158011  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:22.452303  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:22.452748  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:22.590963  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:22.655856  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:22.949568  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:22.949619  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:23.091094  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:23.155741  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:23.449880  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:23.449919  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:23.590590  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:23.658406  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:23.948819  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:23.949527  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:24.090686  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:24.154696  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:24.449105  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:24.449431  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:24.591490  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:24.656162  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:24.948671  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:24.948867  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:25.089628  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:25.157506  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:25.448637  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:25.449144  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:25.589959  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:25.654962  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:25.949839  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:25.950510  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:26.091561  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:26.156350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:26.448681  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:26.448908  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:26.590622  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:26.657217  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:26.948184  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:26.950039  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:27.089200  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:27.155324  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:27.449676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:27.449798  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:27.590267  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:27.655290  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:27.948648  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:27.948982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:28.090233  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:28.155268  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:28.448106  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:28.448387  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:28.589756  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:28.656215  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:28.948715  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:28.949727  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:29.090059  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:29.155563  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:29.448981  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:29.449967  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:29.589372  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:29.656746  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:29.951190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:29.951266  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.089966  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:30.156024  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:30.449807  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:30.449940  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.592795  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:30.655965  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:30.949686  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.949854  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:31.089144  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:31.155728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:31.448249  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:31.451576  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:31.590176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:31.656389  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:31.949905  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:31.950451  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:32.090191  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:32.156400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:32.449602  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:32.449836  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:32.591164  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:32.657213  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:32.948520  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:32.948804  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:33.089649  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:33.156050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:33.450227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:33.450227  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:33.590456  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:33.656274  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:33.949256  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:33.949347  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:34.091203  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:34.156547  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:34.450354  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:34.450411  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:34.591349  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:34.656156  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:34.948431  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:34.948893  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:35.089378  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:35.156784  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:35.450919  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:35.451766  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:35.589587  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:35.656818  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:35.949417  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:35.950715  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:36.090779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:36.155710  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:36.452002  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:36.452240  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:36.590343  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:36.655697  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:36.949354  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:36.949385  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:37.091333  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:37.155660  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:37.448936  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:37.449075  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:37.590116  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:37.656050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:37.949528  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:37.950239  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:38.090375  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:38.156630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:38.449400  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:38.449825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:38.590511  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:38.655832  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:38.948985  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:38.949093  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:39.090158  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:39.155820  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:39.449629  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:39.451242  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:39.590400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:39.656829  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:39.948865  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:39.949106  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:40.089281  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:40.156612  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:40.450580  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:40.450998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:40.590980  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:40.655008  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:40.949712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:40.949853  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:41.089939  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:41.155401  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:41.448080  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:41.451541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:41.590421  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:41.656608  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:41.950025  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:41.950358  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:42.090340  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:42.159954  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:42.450058  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:42.450329  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:42.589818  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:42.655716  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:42.948985  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:42.952252  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:43.090380  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:43.155314  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:43.450015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:43.450202  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:43.590190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:43.655086  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:43.948401  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:43.949453  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:44.090744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:44.154784  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:44.449614  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:44.449642  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:44.590645  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:44.656686  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:44.950021  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:44.951009  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:45.090020  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:45.155822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:45.449438  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:45.449646  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:45.590975  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:45.656192  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:45.949128  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:45.949580  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:46.091176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:46.155290  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:46.448997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:46.450442  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:46.590802  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:46.654435  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:46.949893  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:46.950255  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:47.091631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:47.156353  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:47.450093  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:47.455744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:47.622817  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:47.657485  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:47.951291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:47.953670  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:48.093758  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:48.155393  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:48.452298  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:48.452366  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:48.592111  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:48.657572  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:48.951626  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:48.952512  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:49.091082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:49.157173  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:49.452908  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:49.453973  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:49.591765  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:49.699112  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:49.951994  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:49.953086  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:50.090983  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:50.162358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:50.452611  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:50.453823  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:50.593450  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:50.664907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:50.961300  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:50.961709  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:51.105008  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:51.168542  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:51.460773  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:51.463367  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:51.596820  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:51.659982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:51.954007  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:51.956978  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:52.090564  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:52.156735  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:52.459306  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:52.461605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:52.591646  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:52.659476  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:52.949249  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:52.949360  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:53.091342  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:53.158735  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:53.451408  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:53.454585  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:53.590776  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:53.656237  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:53.954524  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:53.954679  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:54.095794  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:54.159448  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:54.576047  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:54.576308  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:54.590001  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:54.659406  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:54.950589  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:54.950691  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:55.092084  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:55.157456  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:55.451531  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:55.451907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:55.590653  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:55.655648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:55.949374  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:55.953638  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:56.090027  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:56.156602  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:56.448573  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:56.448625  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:56.593728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:56.658937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:56.952879  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:56.952929  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:57.091934  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:57.159057  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:57.451436  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:57.455516  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... identical kapi.go:96 polling lines omitted: the same four selectors ("kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx") were re-checked at ~500ms intervals from 23:57:57 through 23:58:30, every check reporting current state: Pending: [<nil>] ...]
	I1211 23:58:29.950515  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:29.950799  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:30.090326  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:30.155485  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:30.448572  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:30.449692  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:30.590878  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:30.655807  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:30.956951  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:30.957577  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:31.092534  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:31.155903  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:31.449802  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:31.450326  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:31.593269  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:31.656218  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:31.949211  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:31.949934  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:32.091982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:32.155603  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:32.449522  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:32.451425  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:32.590687  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:32.655082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:32.950545  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:32.950713  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:33.091712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:33.156900  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:33.450998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:33.451121  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:33.592756  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:33.655387  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:33.956059  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:33.956346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:34.090541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:34.155676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:34.449252  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:34.449255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:34.589931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:34.655778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:34.950791  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:34.951042  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:35.089716  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:35.155182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:35.447641  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:35.449881  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:35.590101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:35.655365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:35.949158  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:35.951312  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:36.090687  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:36.156509  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:36.448272  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:36.448489  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:36.591352  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:36.657569  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:36.950696  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:36.952142  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:37.090121  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:37.155891  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:37.448859  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:37.449811  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:37.589598  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:37.655164  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:37.950606  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:37.950726  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:38.089931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:38.155402  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:38.449956  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:38.450889  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:38.590982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:38.655741  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:38.950070  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:38.950118  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:39.090737  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:39.156071  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:39.448413  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:39.448760  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:39.590316  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:39.655228  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:39.948192  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:39.948232  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:40.089574  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:40.156012  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:40.448864  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:40.451601  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:40.592083  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:40.656209  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:40.948842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:40.949127  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:41.091091  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:41.155236  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:41.449778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:41.450851  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:41.589659  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:41.656116  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:41.949174  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:41.949802  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:42.090816  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:42.155802  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:42.450496  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:42.452958  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:42.591015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:42.655595  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:42.949982  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:42.951301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:43.091554  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:43.155772  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:43.451215  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:43.451399  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:43.590489  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:43.655665  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:43.949328  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:43.950974  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:44.092276  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:44.155455  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:44.449429  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:44.449512  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:44.591046  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:44.655586  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:44.949500  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:44.951599  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:45.094722  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:45.154774  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:45.449770  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:45.451691  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:45.590761  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:45.655352  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:45.949743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:45.949864  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:46.090631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:46.156103  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:46.449181  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:46.449779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:46.591976  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:46.655596  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:46.949173  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:46.950623  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:47.093977  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:47.156056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:47.450281  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:47.450897  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:47.591849  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:47.655891  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:47.950318  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:47.951578  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:48.091959  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:48.154872  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:48.450075  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:48.451948  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:48.589733  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:48.655026  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:48.947902  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:48.948922  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:49.090363  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:49.155236  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:49.449018  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:49.449294  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:49.589648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:49.654518  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:49.949085  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:49.949327  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:50.089715  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:50.155336  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:50.450276  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:50.450610  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:50.590265  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:50.655617  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:50.949893  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:50.951287  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:51.090631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:51.155403  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:51.449820  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:51.451010  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:51.591075  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:51.654839  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:51.949284  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:51.950009  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:52.090582  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:52.157494  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:52.448608  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:52.450368  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:52.590998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:52.655180  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:52.948718  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:52.950284  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:53.090712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:53.158605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:53.451168  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:53.451536  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:53.589760  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:53.657022  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:53.948734  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:53.951371  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:54.090202  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:54.155484  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:54.448582  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:54.450090  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:54.589620  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:54.656268  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:54.950155  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:54.950342  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:55.092526  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:55.155567  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:55.448897  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:55.450647  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:55.590184  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:55.656034  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:55.948843  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:55.949804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:56.092633  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:56.155535  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:56.449050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:56.450032  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:56.589978  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:56.655578  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:56.951227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:56.951391  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:57.089968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:57.156011  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:57.449111  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:57.449543  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:57.591323  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:57.656295  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:57.949838  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:57.950157  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:58.090263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:58.155586  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:58.450591  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:58.450796  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:58.590735  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:58.655042  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:58.948769  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:58.949101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:59.089480  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:59.156356  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:59.450318  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:59.452097  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:59.589757  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:59.656038  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:59.951264  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:59.955025  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:00.093307  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:00.169810  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:00.453668  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:00.453747  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:00.591664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:00.662082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:00.958327  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:00.958678  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:01.093618  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:01.191821  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:01.455185  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:01.458398  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:01.593233  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:01.657309  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:01.950520  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:01.956319  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:02.092841  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:02.158368  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:02.454368  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:02.454386  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:02.592341  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:02.658118  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:02.969970  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:02.970262  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:03.091543  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:03.193034  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:03.478206  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:03.494398  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:03.601217  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:03.659210  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:03.956276  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:03.961174  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:04.090843  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:04.154383  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:04.451688  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:04.451709  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:04.590930  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:04.656263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:04.949363  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:04.950133  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:05.102487  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:05.156487  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:05.456245  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:05.457922  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:05.596196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:05.660935  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:05.949095  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:05.954162  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:06.098801  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:06.161484  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:06.448923  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:06.452592  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:06.590210  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:06.659607  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:06.954480  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:06.955630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:07.094202  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:07.161252  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:07.451546  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:07.451627  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:07.599662  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:07.656720  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:07.951554  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:07.951751  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:08.096946  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:08.157724  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:08.453200  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:08.453207  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:08.592711  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:08.695126  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:08.958140  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:08.958561  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:09.090111  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:09.155633  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:09.450116  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:09.450157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:09.595338  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:09.656262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:09.950903  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:09.951773  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:10.089779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:10.155949  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:10.448520  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:10.449409  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:10.599275  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:10.659673  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:10.948979  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:10.950560  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:11.090875  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:11.155105  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:11.449575  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:11.450246  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:11.600631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:11.658293  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:11.950730  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:11.950966  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:12.090374  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:12.157299  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:12.449320  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:12.449345  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:12.593214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:12.664092  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:12.950600  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:12.951052  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:13.090059  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:13.154911  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:13.450841  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:13.450957  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:13.592263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:13.655883  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:13.948080  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:13.948214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:14.089646  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:14.157040  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:14.448769  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:14.449141  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:14.590729  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:14.654626  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:14.949399  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:14.951103  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:15.092446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:15.156294  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:15.452499  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:15.452500  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:15.590621  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:15.657627  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:15.951795  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:15.952077  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:16.089680  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:16.156176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:16.448324  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:16.448431  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:16.590666  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:16.656743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:16.948906  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:16.949692  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:17.091257  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:17.155187  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:17.450607  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:17.450848  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:17.589365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:17.655407  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:17.948856  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:17.949516  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:18.091507  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:18.155888  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:18.451560  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:18.452505  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:18.590970  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:18.655165  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:18.947845  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:18.949425  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:19.090758  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:19.154844  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:19.450275  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:19.451846  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:19.589989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:19.655133  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:19.950045  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:19.950331  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:20.090153  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:20.155708  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:20.448562  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:20.448562  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:20.591627  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:20.655924  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:20.947974  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:20.948912  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:21.089422  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:21.155655  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:21.449438  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:21.449734  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:21.589919  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:21.657291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:21.949251  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:21.952143  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:22.091386  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:22.157354  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:22.448913  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:22.449226  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:22.590529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:22.657745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:22.948540  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:22.948933  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:23.089214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:23.157137  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:23.450670  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:23.450902  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:23.590522  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:23.656379  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:23.950154  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:23.950625  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:24.091355  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:24.157165  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:24.448825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:24.453054  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:24.590234  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:24.657541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:24.949335  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:24.951024  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:25.092211  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:25.154931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:25.448939  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:25.448993  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:25.589597  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:25.656199  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:25.948738  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:25.949046  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:26.091849  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:26.154651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:26.448387  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... repeated kapi.go:96 polling output elided: the same four label selectors ("kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver") were re-checked roughly every 500ms from 23:59:26 through 23:59:58, each check reporting current state: Pending: [<nil>] ...]
	I1211 23:59:59.090864  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:59.155322  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:59.450350  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:59.450722  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:59.590799  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:59.655636  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:59.950576  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:59.950769  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:00.090194  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:00.156754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:00.449577  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:00.450369  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:00.591719  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:00.655897  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:00.950338  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:00.950455  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:01.090278  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:01.156266  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:01.452423  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:01.453174  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:01.591221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:01.657914  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:01.948554  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:01.948798  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:02.090601  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:02.157198  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:02.447995  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:02.448026  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:02.590202  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:02.657562  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:02.949780  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:02.952121  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:03.090613  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:03.155733  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:03.449937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:03.449933  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:03.590926  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:03.658062  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:03.949170  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:03.949810  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:04.091414  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:04.155665  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:04.448744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:04.448999  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:04.589836  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:04.656449  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:04.948744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:04.948893  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:05.091208  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:05.156906  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:05.449064  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:05.449106  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:05.590901  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:05.656845  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:05.950206  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:05.950384  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:06.090523  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:06.155990  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:06.449777  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:06.450837  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:06.590182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:06.656853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:06.948285  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:06.948607  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:07.089995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:07.155347  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:07.449239  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:07.449281  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:07.590385  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:07.656186  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:07.948484  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:07.949766  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:08.090129  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:08.156334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:08.449094  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:08.449099  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:08.590056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:08.655353  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:08.948870  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:08.949837  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:09.089440  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:09.155221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:09.448572  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:09.449128  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:09.590937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:09.655774  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:09.950643  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:09.950782  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:10.091963  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:10.157142  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:10.447872  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:10.448754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:10.590167  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:10.655410  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:10.948681  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:10.950881  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:11.090375  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:11.157845  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:11.448987  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:11.451786  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:11.589278  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:11.656898  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:11.948648  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:11.951845  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:12.089679  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:12.156033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:12.448020  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:12.448554  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:12.591529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:12.657808  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:12.949844  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:12.950340  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:13.090550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:13.156837  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:13.449856  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:13.449881  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:13.590325  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:13.656633  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:13.951242  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:13.951287  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:14.089997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:14.155198  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:14.448400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:14.448585  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:14.590551  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:14.656896  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:14.949893  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:14.951119  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:15.090441  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:15.155404  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:15.451852  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:15.452266  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:15.591327  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:15.656049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:15.952977  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:15.953024  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:16.093981  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:16.156724  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:16.448908  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:16.451378  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:16.592066  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:16.657270  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:16.948987  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:16.949080  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:17.090484  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:17.158533  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:17.449593  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:17.449614  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:17.591576  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:17.656835  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:17.952242  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:17.952334  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:18.091234  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:18.156793  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:18.450858  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:18.451103  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:18.590911  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:18.655661  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:18.950782  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:18.950840  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:19.091124  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:19.154772  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:19.449026  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:19.451771  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:19.590291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:19.657065  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:19.951301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:19.951653  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:20.089930  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:20.156561  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:20.448782  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:20.453763  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:20.591804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:20.655268  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:20.948366  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:20.948454  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:21.090508  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:21.158196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:21.449394  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:21.449441  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:21.590734  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:21.655940  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:21.950169  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:21.950328  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:22.089889  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:22.157558  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:22.449534  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:22.449815  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:22.590378  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:22.655963  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:22.947894  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:22.948182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:23.090569  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:23.156273  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:23.450639  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:23.450816  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:23.589218  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:23.655281  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:23.949543  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:23.949989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:24.090664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:24.155725  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:24.449198  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:24.451299  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:24.590352  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:24.656205  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:24.947767  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:24.948451  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:25.090431  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:25.156379  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:25.449358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:25.449672  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:25.589853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:25.654878  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:25.949904  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:25.950152  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:26.089797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:26.155724  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:26.449336  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:26.450596  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:26.592346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:26.657848  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:26.949333  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:26.950229  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:27.090752  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:27.157107  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:27.449820  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:27.450010  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:27.590514  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:27.657927  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:27.951547  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:27.952176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:28.090550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:28.156767  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:28.450227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:28.451522  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:28.591521  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:28.656790  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:28.949538  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:28.949826  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:29.090055  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:29.155834  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:29.450097  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:29.450167  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:29.590630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:29.655299  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:29.949633  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:29.950101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:30.089708  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:30.154762  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:30.449094  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:30.450366  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:30.590666  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:30.655870  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:30.948836  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:30.948972  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:31.089244  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:31.155334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:31.448853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:31.449043  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:31.590675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:31.655919  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:31.950253  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:31.951767  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:32.089750  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:32.155423  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:32.449657  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:32.449946  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:32.590358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:32.656797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:32.950101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:32.950269  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:33.090803  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:33.154674  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:33.454615  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:33.454885  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:33.589479  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:33.656942  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:33.953188  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:33.954139  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:34.091629  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:34.156823  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:34.448754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:34.449071  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:34.589301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:34.656551  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:34.948611  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:34.950196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:35.091634  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:35.160584  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:35.448684  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:35.449322  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:35.589630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:35.655232  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:35.947899  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:35.948842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:36.090521  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:36.155599  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:36.449031  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:36.449382  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:36.591743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:36.655255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:36.948722  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:36.949779  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:37.090918  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:37.157590  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:37.448713  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:37.449843  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:37.589677  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:37.656720  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:37.949867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:37.950644  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:38.093262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:38.156220  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:38.448943  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:38.450543  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:38.591971  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:38.655424  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:38.949892  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:38.951285  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:39.090837  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:39.155790  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:39.449689  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:39.450060  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:39.590012  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:39.655544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:39.949824  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:39.954336  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:40.095357  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:40.155946  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:40.451271  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:40.452848  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:40.590990  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:40.655214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:40.963350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:40.967975  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:41.092691  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:41.157255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:41.461052  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:41.464606  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:41.592100  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:41.658218  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:41.951346  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:41.953539  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:42.091948  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:42.170296  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:42.449833  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:42.449879  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:42.589631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:42.655925  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:42.952512  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:42.953941  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:43.090620  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:43.155937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:43.449805  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:43.451726  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:43.590975  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:43.655839  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:43.949267  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:43.950221  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:44.091825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:44.158335  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:44.448909  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:44.450502  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:44.590179  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:44.656226  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:44.948916  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:44.950140  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:45.089907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:45.156705  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:45.449149  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:45.449285  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:45.590294  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:45.655955  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:45.948817  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:45.951525  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:46.091170  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:46.155968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:46.448814  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:46.450026  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:46.590257  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:46.655476  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:46.950202  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:46.950358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:47.091544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:47.156635  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:47.448759  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:47.450582  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:47.591771  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:47.655438  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:47.951589  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:47.951950  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:48.091551  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:48.155719  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:48.449736  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:48.449931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:48.590742  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:48.656337  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:48.951175  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:48.951871  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:49.089625  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:49.154672  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:49.449387  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:49.451177  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:49.589995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:49.655055  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:49.947911  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:49.948323  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:50.090498  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:50.155724  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:50.448625  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:50.449769  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:50.589819  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:50.656445  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:50.952353  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:50.952565  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:51.091804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:51.155910  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:51.449736  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:51.452867  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:51.590141  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:51.655015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:51.949047  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:51.951778  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:52.091793  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:52.156022  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:52.448369  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:52.448494  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:52.592499  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:52.657185  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:52.948673  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:52.949634  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:53.092041  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:53.157391  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:53.451159  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:53.451297  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:53.592141  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:53.655589  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:53.949414  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:53.949654  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:54.090980  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:54.157644  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:54.449578  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:54.449942  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:54.592657  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:54.655642  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:54.949887  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:54.950225  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:55.090862  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:55.155383  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:55.448872  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:55.450478  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:55.592070  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:55.655878  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:55.950573  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:55.951643  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:56.090608  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:56.156601  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:56.449633  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:56.449740  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:56.589768  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:56.656648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:56.951253  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:56.951560  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:57.090880  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:57.155285  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:57.450738  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:57.452046  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:57.590263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:57.657500  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:57.950152  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:57.950364  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:58.091638  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:58.193386  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:58.450222  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:58.450351  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:58.591052  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:58.656102  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:58.948215  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:58.948208  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:59.090720  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:59.155790  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:59.449636  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:59.450870  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:59.589564  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:59.656184  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:59.948230  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:59.948326  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:00.091313  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:00.155446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:00.449946  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:00.449997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:00.590931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:00.655953  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:00.949430  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:00.949437  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:01.091948  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:01.156208  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:01.452056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:01.452238  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:01.590869  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:01.655918  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:01.949266  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:01.950875  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:02.094697  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:02.155278  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:02.456102  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:02.456104  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:02.591508  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:02.657972  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:02.950365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:02.950787  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:03.091261  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:03.155749  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:03.451192  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:03.451626  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:03.592675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:03.657198  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:03.949710  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:03.950534  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:04.090705  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:04.154619  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:04.450263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:04.451232  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:04.589795  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:04.654975  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:04.951313  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:04.952632  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:05.093185  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:05.156889  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:05.448891  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:05.452008  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:05.589422  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:05.655673  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:05.954272  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:05.955495  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:06.090800  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:06.166615  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:06.451261  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:06.451837  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:06.592681  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:06.655679  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:06.949675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:06.949685  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:07.091261  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:07.156385  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:07.449932  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:07.450455  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:07.590827  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:07.655109  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:07.949211  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:07.950064  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:08.090887  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:08.154572  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:08.450690  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:08.450871  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:08.590523  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:08.655973  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:08.948114  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:08.949750  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:09.090989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:09.155955  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:09.449016  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:09.449347  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:09.590817  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:09.656200  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:09.950977  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:09.951430  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:10.091695  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:10.155672  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:10.448805  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:10.449149  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:10.591881  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:10.655305  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:10.948943  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:10.949765  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:11.089778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:11.156846  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:11.450576  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:11.451630  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:11.591910  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:11.657557  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:11.949551  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:11.951423  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:12.090384  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:12.160393  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:12.453917  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:12.453927  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:12.593211  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:12.659806  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:12.963298  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:12.966253  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:13.093949  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:13.194867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:13.468169  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:13.473451  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:13.607669  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:13.664252  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:13.963788  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:13.970682  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:14.100566  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:14.183386  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:14.481122  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:14.481147  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:14.591963  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:14.659279  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:14.953923  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:14.957839  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:15.091640  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:15.160946  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:15.453995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:15.454249  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:15.592976  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:15.657252  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:15.952201  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:15.954099  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:16.091133  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:16.159988  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:16.451755  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:16.452022  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:16.593102  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:16.657191  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:16.949980  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:16.950946  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:17.091395  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:17.156727  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:17.453497  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:17.454292  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:17.590745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:17.658023  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:17.953077  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:17.954404  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:18.144325  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:18.163884  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:18.506329  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:18.507416  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:18.598801  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:18.658864  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:18.951533  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:18.951768  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:19.091399  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:19.157617  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:19.453370  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:19.453419  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:19.590356  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:19.656750  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:19.949694  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:19.952780  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:20.093710  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:20.162888  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:20.455842  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:20.457429  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:20.597047  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:20.658966  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:20.952832  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:20.956314  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:21.093605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:21.160516  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:21.449838  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:21.454368  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:21.590229  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:21.657324  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:21.951876  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:21.955993  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:22.093456  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:22.156844  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:22.452923  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:22.453818  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:22.591894  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:22.664786  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:22.950056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:22.950755  191080 kapi.go:107] duration metric: took 4m31.006766325s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 00:01:23.091356  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:23.164794  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:23.496726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:23.601172  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:23.663423  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:23.954300  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:24.094097  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:24.156533  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:24.450111  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:24.590446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:24.655954  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:24.951486  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:25.101144  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:25.157114  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:25.459936  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:25.589209  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:25.655404  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:25.949290  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:26.091205  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:26.192561  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:26.449239  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:26.594301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:26.695112  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:26.950968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:27.090419  191080 kapi.go:107] duration metric: took 4m31.504642831s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 00:01:27.092322  191080 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-081397 cluster.
	I1212 00:01:27.093973  191080 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 00:01:27.095595  191080 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 00:01:27.155630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:27.448192  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:27.656676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:27.949602  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:28.156035  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:28.452122  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:28.656798  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:28.951030  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:29.155812  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:29.450030  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:29.655506  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:29.950947  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:30.156571  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:30.449689  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:30.657986  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:30.952997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:31.155349  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:31.449194  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:31.657440  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:31.950318  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:32.157071  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:32.449726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:32.657033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:32.950261  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:33.156773  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:33.450869  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:33.655552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:33.950125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:34.156033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:34.449419  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:34.663651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:34.951541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:35.156031  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:35.450253  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:35.655842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:35.948990  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:36.156446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:36.449076  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:36.656334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:36.949221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:37.155204  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:37.448992  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:37.656232  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:37.948670  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:38.155550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:38.449652  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:38.655986  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:38.950165  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:39.156285  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:39.448380  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:39.656058  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:39.950214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:40.157325  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:40.449511  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:40.656623  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:40.952375  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:41.157648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:41.449624  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:41.657125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:41.951249  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:42.157745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:42.451135  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:42.657530  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:42.949771  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:43.155904  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:43.450113  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:43.655365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:43.950157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:44.156180  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:44.450046  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:44.655809  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:44.950604  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:45.155614  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:45.448273  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:45.656354  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:45.950705  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:46.156364  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:46.448416  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:46.658552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:46.949651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:47.158180  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:47.452700  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:47.656868  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:47.949912  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:48.156755  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:48.451939  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:48.656432  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:48.950201  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:49.156157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:49.448332  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:49.656228  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:49.950259  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:50.157269  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:50.448882  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:50.656199  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:50.950248  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:51.156922  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:51.449858  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:51.658522  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:51.950331  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:52.158342  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:52.452607  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:52.657583  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:52.952541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:53.156712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:53.452538  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:53.656385  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:53.949617  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:54.154792  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:54.450797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:54.655995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:54.950745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:55.155328  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:55.448751  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:55.655216  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:55.949363  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:56.157592  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:56.451921  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:56.664544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:56.958059  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:57.156884  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:57.449911  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:57.659329  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:57.950478  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:58.157728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:58.449728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:58.656867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:58.950675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:59.158989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:59.450999  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:59.661594  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:59.948955  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:00.160422  191080 kapi.go:107] duration metric: took 5m6.50988483s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 00:02:00.450671  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:00.952269  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:01.449529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:01.950781  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:02.450250  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:02.953623  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:03.451822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:03.951054  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:04.452684  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:04.952913  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:05.449851  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:05.951096  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:06.448632  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:06.949689  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:07.450190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:07.949743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:08.449834  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:08.949956  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:09.449343  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:09.950154  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:10.449652  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:10.953533  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:11.448912  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:11.950203  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:12.450650  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:12.950028  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:13.451182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:13.950015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:14.449762  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:14.949166  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:15.450181  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:15.950756  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:16.448817  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:16.948583  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:17.449804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:17.951493  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:18.450240  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:18.951299  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:19.450677  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:19.949706  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:20.449531  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:20.950756  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:21.450374  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:21.951339  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:22.449394  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:22.950909  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:23.477937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:23.951049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:24.448664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:24.949615  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:25.449359  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:25.949444  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:26.450501  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:26.949804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:27.450825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:27.948894  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:28.449021  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:28.950004  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:29.450317  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:29.949495  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:30.456871  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:30.948730  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:31.449752  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:31.950182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:32.450002  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:32.948690  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:33.448231  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:33.950565  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:34.450626  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:34.949823  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:35.450102  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:35.948400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:36.449033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:36.949643  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:37.449021  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:37.948018  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:38.455139  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:38.950450  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:39.450566  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:39.949291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:40.450245  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:40.951396  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:41.451789  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:41.949099  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:42.450082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:42.954847  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:43.450792  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:43.949191  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:44.449125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:44.949110  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:45.453064  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:45.948748  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:46.449176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:46.948540  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:47.448415  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:47.950829  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:48.450193  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:48.950076  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:49.450726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:49.949133  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:50.448258  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:50.949440  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:51.448882  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:51.944788  191080 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1212 00:02:51.944836  191080 kapi.go:107] duration metric: took 6m0.000623545s to wait for kubernetes.io/minikube-addons=registry ...
	W1212 00:02:51.944978  191080 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1212 00:02:51.946936  191080 out.go:179] * Enabled addons: amd-gpu-device-plugin, default-storageclass, storage-provisioner, inspektor-gadget, cloud-spanner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, ingress, gcp-auth, csi-hostpath-driver
	I1212 00:02:51.948508  191080 addons.go:530] duration metric: took 6m11.452163579s for enable addons: enabled=[amd-gpu-device-plugin default-storageclass storage-provisioner inspektor-gadget cloud-spanner nvidia-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots ingress gcp-auth csi-hostpath-driver]
	I1212 00:02:51.948603  191080 start.go:247] waiting for cluster config update ...
	I1212 00:02:51.948631  191080 start.go:256] writing updated cluster config ...
	I1212 00:02:51.949105  191080 ssh_runner.go:195] Run: rm -f paused
	I1212 00:02:51.959702  191080 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:02:51.966230  191080 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-prc7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.976818  191080 pod_ready.go:94] pod "coredns-66bc5c9577-prc7f" is "Ready"
	I1212 00:02:51.976851  191080 pod_ready.go:86] duration metric: took 10.502006ms for pod "coredns-66bc5c9577-prc7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.982130  191080 pod_ready.go:83] waiting for pod "etcd-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.989125  191080 pod_ready.go:94] pod "etcd-addons-081397" is "Ready"
	I1212 00:02:51.989162  191080 pod_ready.go:86] duration metric: took 7.000579ms for pod "etcd-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.992364  191080 pod_ready.go:83] waiting for pod "kube-apiserver-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.000110  191080 pod_ready.go:94] pod "kube-apiserver-addons-081397" is "Ready"
	I1212 00:02:52.000155  191080 pod_ready.go:86] duration metric: took 7.740136ms for pod "kube-apiserver-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.004027  191080 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.365676  191080 pod_ready.go:94] pod "kube-controller-manager-addons-081397" is "Ready"
	I1212 00:02:52.365718  191080 pod_ready.go:86] duration metric: took 361.647196ms for pod "kube-controller-manager-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.569885  191080 pod_ready.go:83] waiting for pod "kube-proxy-jwqpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.966570  191080 pod_ready.go:94] pod "kube-proxy-jwqpk" is "Ready"
	I1212 00:02:52.966607  191080 pod_ready.go:86] duration metric: took 396.689665ms for pod "kube-proxy-jwqpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:53.167508  191080 pod_ready.go:83] waiting for pod "kube-scheduler-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:53.566695  191080 pod_ready.go:94] pod "kube-scheduler-addons-081397" is "Ready"
	I1212 00:02:53.566729  191080 pod_ready.go:86] duration metric: took 399.188237ms for pod "kube-scheduler-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:53.566746  191080 pod_ready.go:40] duration metric: took 1.607005753s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:02:53.630859  191080 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:02:53.633243  191080 out.go:179] * Done! kubectl is now configured to use "addons-081397" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.711556317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765498141711509185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=841221b7-4f98-4ada-87f1-e6746255b099 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.713775405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6152137-3246-40ec-bd15-edc2efd5d135 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.714020269Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6152137-3246-40ec-bd15-edc2efd5d135 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.714616402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.has
h: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b,PodSandboxId:c1b5ac0ad6da0f8015f757c6d3b289cce8fa504574c2ca4088a745249f081b7f,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765497470859928585,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisione
r-648f6765c9-fpbst,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b792e8c5-5d38-4540-b39b-8c2a3f475c97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-
device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-pr
ovisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: co
redns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150
647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6
529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&Co
ntainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9a
da39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf677b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daaef29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6152137-3246-40ec-bd15-edc2efd5d135 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.770057448Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4afdf6d9-f4e6-40f3-aa99-6d0f3b3052bc name=/runtime.v1.RuntimeService/Version
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.770153607Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4afdf6d9-f4e6-40f3-aa99-6d0f3b3052bc name=/runtime.v1.RuntimeService/Version
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.772404744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac374dd1-8184-4ede-847a-689c6072a53b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.773927075Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765498141773888752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac374dd1-8184-4ede-847a-689c6072a53b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.775581191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8cd66e6-d732-4d8e-b73e-4c9f824a8894 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.775677379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8cd66e6-d732-4d8e-b73e-4c9f824a8894 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.776098702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.has
h: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b,PodSandboxId:c1b5ac0ad6da0f8015f757c6d3b289cce8fa504574c2ca4088a745249f081b7f,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765497470859928585,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisione
r-648f6765c9-fpbst,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b792e8c5-5d38-4540-b39b-8c2a3f475c97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-
device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-pr
ovisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: co
redns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150
647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6
529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&Co
ntainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9a
da39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf677b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daaef29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a8cd66e6-d732-4d8e-b73e-4c9f824a8894 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.824692079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee50a669-3676-462d-9ba9-cbcccb6385b7 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.824781015Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee50a669-3676-462d-9ba9-cbcccb6385b7 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.826838985Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b16f310d-76ad-424b-9c9c-b0d86f358ca4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.828137402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765498141828100405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b16f310d-76ad-424b-9c9c-b0d86f358ca4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.875565700Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92510345-38e6-4163-8e72-b1648d658cf6 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.875678404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92510345-38e6-4163-8e72-b1648d658cf6 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.878061150Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cd2a16c-4016-480f-9f07-c509c0ebb6e0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.879528720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765498141879455843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cd2a16c-4016-480f-9f07-c509c0ebb6e0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.881196278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a849737b-6070-419c-b610-60a491d619b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.881541720Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a849737b-6070-419c-b610-60a491d619b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:09:01 addons-081397 crio[814]: time="2025-12-12 00:09:01.882275198Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.has
h: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b,PodSandboxId:c1b5ac0ad6da0f8015f757c6d3b289cce8fa504574c2ca4088a745249f081b7f,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765497470859928585,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisione
r-648f6765c9-fpbst,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b792e8c5-5d38-4540-b39b-8c2a3f475c97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-
device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-pr
ovisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: co
redns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150
647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6
529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&Co
ntainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9a
da39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf677b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daaef29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a849737b-6070-419c-b610-60a491d619b4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                       NAMESPACE
	825fa31ff05b6       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                 4 minutes ago       Running             nginx                     0                   8c904991200ec       nginx                                     default
	25d63d362311b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                5 minutes ago       Running             busybox                   0                   32cdf5109ec8d       busybox                                   default
	86266748a7014       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac    11 minutes ago      Running             registry-proxy            0                   4f522a691840e       registry-proxy-fdnc8                      kube-system
	c0808e7e8387c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef   11 minutes ago      Running             local-path-provisioner    0                   c1b5ac0ad6da0       local-path-provisioner-648f6765c9-fpbst   local-path-storage
	0ee283d133145       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f           12 minutes ago      Running             amd-gpu-device-plugin     0                   d6396506b4332       amd-gpu-device-plugin-djxv6               kube-system
	636669d18a2e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   12 minutes ago      Running             storage-provisioner       0                   d4c844a547362       storage-provisioner                       kube-system
	079f9768ce55c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                   12 minutes ago      Running             coredns                   0                   241bbeea7c618       coredns-66bc5c9577-prc7f                  kube-system
	7f5ed4f373cfd       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                   12 minutes ago      Running             kube-proxy                0                   84c65d7d95ff4       kube-proxy-jwqpk                          kube-system
	7ace0e7fbfc94       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                   12 minutes ago      Running             kube-controller-manager   0                   f75b7d32aa473       kube-controller-manager-addons-081397     kube-system
	d8612fac71b8e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                   12 minutes ago      Running             kube-scheduler            0                   32069928e35e6       kube-scheduler-addons-081397              kube-system
	712e27a28f3ca       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                   12 minutes ago      Running             kube-apiserver            0                   78928c0146bf6       kube-apiserver-addons-081397              kube-system
	f00e427bcb7fb       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                   12 minutes ago      Running             etcd                      0                   d442318c9ea69       etcd-addons-081397                        kube-system
	
	
	==> coredns [079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56] <==
	[INFO] 10.244.0.10:45181 - 15776 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000176017s
	[INFO] 10.244.0.10:44244 - 3184 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000187992s
	[INFO] 10.244.0.10:44244 - 31001 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000148331s
	[INFO] 10.244.0.10:44244 - 16540 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00010759s
	[INFO] 10.244.0.10:44244 - 56307 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000122122s
	[INFO] 10.244.0.10:44244 - 14875 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000120498s
	[INFO] 10.244.0.10:44244 - 19118 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000126887s
	[INFO] 10.244.0.10:44244 - 30910 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000130372s
	[INFO] 10.244.0.10:44244 - 60699 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000148485s
	[INFO] 10.244.0.10:40292 - 29736 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000202931s
	[INFO] 10.244.0.10:40292 - 20579 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000183033s
	[INFO] 10.244.0.10:40292 - 60537 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000096236s
	[INFO] 10.244.0.10:40292 - 8607 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000092518s
	[INFO] 10.244.0.10:40292 - 34179 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000197746s
	[INFO] 10.244.0.10:40292 - 36376 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000807105s
	[INFO] 10.244.0.10:40292 - 43454 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000159038s
	[INFO] 10.244.0.10:40292 - 17865 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000211342s
	[INFO] 10.244.0.10:50195 - 39305 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000178908s
	[INFO] 10.244.0.10:50195 - 42468 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.001523275s
	[INFO] 10.244.0.10:50195 - 10261 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000153487s
	[INFO] 10.244.0.10:50195 - 29447 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000168551s
	[INFO] 10.244.0.10:50195 - 13715 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000066625s
	[INFO] 10.244.0.10:50195 - 44605 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000069693s
	[INFO] 10.244.0.10:50195 - 48919 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000209026s
	[INFO] 10.244.0.10:50195 - 8236 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000183711s
	
	
	==> describe nodes <==
	Name:               addons-081397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-081397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=addons-081397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_11T23_56_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-081397
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 11 Dec 2025 23:56:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-081397
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:09:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:06:56 +0000   Thu, 11 Dec 2025 23:56:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:06:56 +0000   Thu, 11 Dec 2025 23:56:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:06:56 +0000   Thu, 11 Dec 2025 23:56:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:06:56 +0000   Thu, 11 Dec 2025 23:56:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    addons-081397
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3908Mi
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3908Mi
	  pods:               110
	System Info:
	  Machine ID:                 132f08c043de4a3fabcb9cf58535d902
	  System UUID:                132f08c0-43de-4a3f-abcb-9cf58535d902
	  Boot ID:                    7a0deef8-e8c7-4912-a254-b2bd4a5f2873
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     hello-world-app-5d498dc89-gqw57                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 amd-gpu-device-plugin-djxv6                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-prc7f                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-081397                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-081397                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-081397                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jwqpk                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-081397                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-6b586f9694-f9q5b                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-creds-764b6fb674-fn77c                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-proxy-fdnc8                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  local-path-storage          local-path-provisioner-648f6765c9-fpbst                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-081397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-081397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-081397 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-081397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-081397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-081397 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-081397 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-081397 event: Registered Node addons-081397 in Controller
	
	
	==> dmesg <==
	[Dec12 00:00] kauditd_printk_skb: 5 callbacks suppressed
	[Dec12 00:01] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.217370] kauditd_printk_skb: 65 callbacks suppressed
	[  +8.838372] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.897990] kauditd_printk_skb: 38 callbacks suppressed
	[ +21.981224] kauditd_printk_skb: 2 callbacks suppressed
	[Dec12 00:02] kauditd_printk_skb: 20 callbacks suppressed
	[Dec12 00:03] kauditd_printk_skb: 26 callbacks suppressed
	[ +10.025958] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.337280] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.693930] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.887520] kauditd_printk_skb: 43 callbacks suppressed
	[  +1.616504] kauditd_printk_skb: 83 callbacks suppressed
	[Dec12 00:04] kauditd_printk_skb: 89 callbacks suppressed
	[  +0.000054] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.912354] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.453375] kauditd_printk_skb: 127 callbacks suppressed
	[  +0.000073] kauditd_printk_skb: 11 callbacks suppressed
	[Dec12 00:06] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.863558] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.805188] kauditd_printk_skb: 27 callbacks suppressed
	[  +0.008747] kauditd_printk_skb: 53 callbacks suppressed
	[Dec12 00:07] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.000073] kauditd_printk_skb: 13 callbacks suppressed
	[Dec12 00:08] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529] <==
	{"level":"info","ts":"2025-12-11T23:57:54.563834Z","caller":"traceutil/trace.go:172","msg":"trace[62183190] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1035; }","duration":"119.045886ms","start":"2025-12-11T23:57:54.444782Z","end":"2025-12-11T23:57:54.563827Z","steps":["trace[62183190] 'agreement among raft nodes before linearized reading'  (duration: 118.982426ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-11T23:57:54.564190Z","caller":"traceutil/trace.go:172","msg":"trace[2039299796] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"179.02635ms","start":"2025-12-11T23:57:54.385155Z","end":"2025-12-11T23:57:54.564182Z","steps":["trace[2039299796] 'process raft request'  (duration: 178.524709ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:57:54.565247Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.428918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:57:54.565413Z","caller":"traceutil/trace.go:172","msg":"trace[222868242] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1035; }","duration":"119.534642ms","start":"2025-12-11T23:57:54.445807Z","end":"2025-12-11T23:57:54.565342Z","steps":["trace[222868242] 'agreement among raft nodes before linearized reading'  (duration: 119.367809ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-11T23:58:03.552158Z","caller":"traceutil/trace.go:172","msg":"trace[1638119342] linearizableReadLoop","detail":"{readStateIndex:1095; appliedIndex:1096; }","duration":"156.418496ms","start":"2025-12-11T23:58:03.395726Z","end":"2025-12-11T23:58:03.552144Z","steps":["trace[1638119342] 'read index received'  (duration: 156.415444ms)","trace[1638119342] 'applied index is now lower than readState.Index'  (duration: 2.503µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-11T23:58:03.552301Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.56477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:58:03.552320Z","caller":"traceutil/trace.go:172","msg":"trace[928892129] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1059; }","duration":"156.592939ms","start":"2025-12-11T23:58:03.395722Z","end":"2025-12-11T23:58:03.552315Z","steps":["trace[928892129] 'agreement among raft nodes before linearized reading'  (duration: 156.542706ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:58:03.554244Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.397714ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:58:03.555824Z","caller":"traceutil/trace.go:172","msg":"trace[949728136] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1059; }","duration":"111.983139ms","start":"2025-12-11T23:58:03.443830Z","end":"2025-12-11T23:58:03.555813Z","steps":["trace[949728136] 'agreement among raft nodes before linearized reading'  (duration: 110.370385ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-11T23:58:03.554796Z","caller":"traceutil/trace.go:172","msg":"trace[1547687040] transaction","detail":"{read_only:false; response_revision:1060; number_of_response:1; }","duration":"112.058069ms","start":"2025-12-11T23:58:03.442727Z","end":"2025-12-11T23:58:03.554786Z","steps":["trace[1547687040] 'process raft request'  (duration: 111.966352ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:58:03.555039Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.923532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:58:03.556516Z","caller":"traceutil/trace.go:172","msg":"trace[1507526217] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1060; }","duration":"113.405565ms","start":"2025-12-11T23:58:03.443103Z","end":"2025-12-11T23:58:03.556508Z","steps":["trace[1507526217] 'agreement among raft nodes before linearized reading'  (duration: 111.826397ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:59:39.393302Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.001392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-12-11T23:59:39.393692Z","caller":"traceutil/trace.go:172","msg":"trace[1235881685] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1239; }","duration":"171.464156ms","start":"2025-12-11T23:59:39.222198Z","end":"2025-12-11T23:59:39.393662Z","steps":["trace[1235881685] 'range keys from in-memory index tree'  (duration: 170.767828ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:59:39.393736Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"240.832598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:59:39.393801Z","caller":"traceutil/trace.go:172","msg":"trace[1862727742] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1239; }","duration":"240.918211ms","start":"2025-12-11T23:59:39.152870Z","end":"2025-12-11T23:59:39.393789Z","steps":["trace[1862727742] 'range keys from in-memory index tree'  (duration: 240.669473ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:01:18.494075Z","caller":"traceutil/trace.go:172","msg":"trace[729500464] transaction","detail":"{read_only:false; response_revision:1398; number_of_response:1; }","duration":"106.783316ms","start":"2025-12-12T00:01:18.387266Z","end":"2025-12-12T00:01:18.494049Z","steps":["trace[729500464] 'process raft request'  (duration: 106.410306ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:02:27.300606Z","caller":"traceutil/trace.go:172","msg":"trace[636598247] transaction","detail":"{read_only:false; response_revision:1559; number_of_response:1; }","duration":"178.765669ms","start":"2025-12-12T00:02:27.121805Z","end":"2025-12-12T00:02:27.300571Z","steps":["trace[636598247] 'process raft request'  (duration: 178.598198ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:03:50.302340Z","caller":"traceutil/trace.go:172","msg":"trace[1017299845] linearizableReadLoop","detail":"{readStateIndex:1944; appliedIndex:1944; }","duration":"211.137553ms","start":"2025-12-12T00:03:50.091151Z","end":"2025-12-12T00:03:50.302289Z","steps":["trace[1017299845] 'read index received'  (duration: 211.129428ms)","trace[1017299845] 'applied index is now lower than readState.Index'  (duration: 7.353µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T00:03:50.302716Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"211.444735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:03:50.302750Z","caller":"traceutil/trace.go:172","msg":"trace[412680698] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1831; }","duration":"211.595679ms","start":"2025-12-12T00:03:50.091146Z","end":"2025-12-12T00:03:50.302742Z","steps":["trace[412680698] 'agreement among raft nodes before linearized reading'  (duration: 211.378448ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:03:50.302919Z","caller":"traceutil/trace.go:172","msg":"trace[333147361] transaction","detail":"{read_only:false; response_revision:1832; number_of_response:1; }","duration":"278.806483ms","start":"2025-12-12T00:03:50.024100Z","end":"2025-12-12T00:03:50.302907Z","steps":["trace[333147361] 'process raft request'  (duration: 278.330678ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:06:28.833586Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1445}
	{"level":"info","ts":"2025-12-12T00:06:28.938406Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1445,"took":"103.595743ms","hash":3397286304,"current-db-size-bytes":6336512,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4149248,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2025-12-12T00:06:28.938497Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3397286304,"revision":1445,"compact-revision":-1}
	
	
	==> kernel <==
	 00:09:02 up 13 min,  0 users,  load average: 1.95, 1.41, 1.01
	Linux addons-081397 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0] <==
	E1211 23:57:51.421606       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.158.20:443: connect: connection refused" logger="UnhandledError"
	E1211 23:57:51.423760       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.158.20:443: connect: connection refused" logger="UnhandledError"
	I1211 23:57:51.587823       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 00:03:28.639031       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8443->192.168.39.1:47840: use of closed network connection
	E1212 00:03:28.907630       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8443->192.168.39.1:47866: use of closed network connection
	I1212 00:03:38.672372       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.240.212"}
	I1212 00:03:52.460336       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1212 00:03:56.689684       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 00:03:56.942418       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.190.115"}
	I1212 00:04:08.084581       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1212 00:04:26.464334       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.464735       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.589812       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.589919       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.683731       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.683804       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.703860       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.704083       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.747552       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.747633       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1212 00:04:27.684485       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1212 00:04:27.749684       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1212 00:04:27.811321       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1212 00:06:30.996030       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:06:53.209224       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.181.204"}
	
	
	==> kube-controller-manager [7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1] <==
	E1212 00:06:18.045726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:06:28.574687       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:06:28.576064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:06:38.552726       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:06:38.554143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1212 00:06:48.988911       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	E1212 00:06:49.400063       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:06:49.401478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1212 00:07:09.195055       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	E1212 00:07:11.884892       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:07:11.886599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:07:30.876170       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:07:30.877604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:07:34.360288       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:07:34.361732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:08:02.704051       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:08:02.706405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:08:05.298403       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:08:05.299999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:08:14.084834       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:08:14.086123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:08:44.822094       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:08:44.823361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:08:55.438710       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:08:55.440165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a] <==
	I1211 23:56:41.129554       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1211 23:56:41.230792       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1211 23:56:41.230832       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.2"]
	E1211 23:56:41.230926       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:56:41.372420       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1211 23:56:41.372474       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1211 23:56:41.372505       1 server_linux.go:132] "Using iptables Proxier"
	I1211 23:56:41.403791       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:56:41.404681       1 server.go:527] "Version info" version="v1.34.2"
	I1211 23:56:41.404798       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:56:41.409627       1 config.go:200] "Starting service config controller"
	I1211 23:56:41.409659       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1211 23:56:41.409674       1 config.go:106] "Starting endpoint slice config controller"
	I1211 23:56:41.409677       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1211 23:56:41.409687       1 config.go:403] "Starting serviceCIDR config controller"
	I1211 23:56:41.409690       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1211 23:56:41.421538       1 config.go:309] "Starting node config controller"
	I1211 23:56:41.421577       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1211 23:56:41.421584       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1211 23:56:41.510201       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1211 23:56:41.510238       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1211 23:56:41.510294       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379] <==
	E1211 23:56:31.058088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1211 23:56:31.058174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1211 23:56:31.058339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1211 23:56:31.058520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1211 23:56:31.058583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1211 23:56:31.878612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1211 23:56:31.916337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1211 23:56:31.929867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1211 23:56:31.934421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1211 23:56:31.956823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1211 23:56:31.994674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1211 23:56:32.004329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1211 23:56:32.010178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1211 23:56:32.026980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1211 23:56:32.052788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1211 23:56:32.154842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1211 23:56:32.220469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1211 23:56:32.267618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1211 23:56:32.308064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1211 23:56:32.344466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1211 23:56:32.371737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1211 23:56:32.397888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1211 23:56:32.548714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1211 23:56:32.628885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1211 23:56:34.946153       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:08:05 addons-081397 kubelet[1522]: E1212 00:08:05.259284    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765498085254026621 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:08:05 addons-081397 kubelet[1522]: E1212 00:08:05.259361    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765498085254026621 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:08:06 addons-081397 kubelet[1522]: I1212 00:08:06.547245    1522 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-fdnc8" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:08:14 addons-081397 kubelet[1522]: I1212 00:08:14.551380    1522 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-djxv6" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:08:15 addons-081397 kubelet[1522]: E1212 00:08:15.262398    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765498095261855793 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:08:15 addons-081397 kubelet[1522]: E1212 00:08:15.262440    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765498095261855793 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:08:16 addons-081397 kubelet[1522]: E1212 00:08:16.440822    1522 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Dec 12 00:08:16 addons-081397 kubelet[1522]: E1212 00:08:16.440878    1522 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Dec 12 00:08:16 addons-081397 kubelet[1522]: E1212 00:08:16.441338    1522 kuberuntime_manager.go:1449] "Unhandled Error" err="container hello-world-app start failed in pod hello-world-app-5d498dc89-gqw57_default(fa636fe7-3020-41c2-8bcf-0efb5485419e): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 12 00:08:16 addons-081397 kubelet[1522]: E1212 00:08:16.441382    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-gqw57" podUID="fa636fe7-3020-41c2-8bcf-0efb5485419e"
	Dec 12 00:08:17 addons-081397 kubelet[1522]: E1212 00:08:17.408913    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-gqw57" podUID="fa636fe7-3020-41c2-8bcf-0efb5485419e"
	Dec 12 00:08:19 addons-081397 kubelet[1522]: I1212 00:08:19.546911    1522 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:08:25 addons-081397 kubelet[1522]: E1212 00:08:25.265811    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765498105265263472 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:08:25 addons-081397 kubelet[1522]: E1212 00:08:25.266239    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765498105265263472 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:08:35 addons-081397 kubelet[1522]: E1212 00:08:35.269857    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765498115269052785 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:08:35 addons-081397 kubelet[1522]: E1212 00:08:35.269913    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765498115269052785 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:08:45 addons-081397 kubelet[1522]: E1212 00:08:45.275409    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765498125274266723 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:08:45 addons-081397 kubelet[1522]: E1212 00:08:45.275469    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765498125274266723 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:08:46 addons-081397 kubelet[1522]: E1212 00:08:46.552684    1522 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 12 00:08:46 addons-081397 kubelet[1522]: E1212 00:08:46.552745    1522 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 12 00:08:46 addons-081397 kubelet[1522]: E1212 00:08:46.553039    1522 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc_local-path-storage(f566ec94-0a37-4026-bd83-bbb9447029b7): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 12 00:08:46 addons-081397 kubelet[1522]: E1212 00:08:46.553074    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc" podUID="f566ec94-0a37-4026-bd83-bbb9447029b7"
	Dec 12 00:08:46 addons-081397 kubelet[1522]: E1212 00:08:46.584064    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc" podUID="f566ec94-0a37-4026-bd83-bbb9447029b7"
	Dec 12 00:08:55 addons-081397 kubelet[1522]: E1212 00:08:55.279031    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765498135278406855 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:08:55 addons-081397 kubelet[1522]: E1212 00:08:55.279200    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765498135278406855 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	
	
	==> storage-provisioner [636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529] <==
	W1212 00:08:36.650582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:38.656072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:38.669364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:40.673312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:40.681217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:42.685478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:42.698734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:44.707078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:44.716192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:46.722386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:46.730647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:48.735270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:48.742990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:50.747446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:50.753727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:52.759470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:52.769024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:54.773599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:54.781771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:56.786621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:56.794642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:58.799635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:08:58.808631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:00.813349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:09:00.820326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-081397 -n addons-081397
helpers_test.go:270: (dbg) Run:  kubectl --context addons-081397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-gqw57 test-local-path registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-081397 describe pod hello-world-app-5d498dc89-gqw57 test-local-path registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-081397 describe pod hello-world-app-5d498dc89-gqw57 test-local-path registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc: exit status 1 (112.44423ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-gqw57
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-081397/192.168.39.2
	Start Time:       Fri, 12 Dec 2025 00:06:53 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:           10.244.0.31
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rk5gl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rk5gl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m10s                default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-gqw57 to addons-081397
	  Warning  Failed     47s                  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     47s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    46s                  kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     46s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    33s (x2 over 2m10s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjvf5 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-sjvf5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-6b586f9694-f9q5b" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-fn77c" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-081397 describe pod hello-world-app-5d498dc89-gqw57 test-local-path registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-081397 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (2m5.39074551s)
--- FAIL: TestAddons/parallel/LocalPath (428.74s)

TestAddons/parallel/Yakd (129.65s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-zbhhw" [0d366411-2739-4499-8990-e9c2c974d30b] Pending / Ready:ContainersNotReady (containers with unready status: [yakd]) / ContainersReady:ContainersNotReady (containers with unready status: [yakd])
helpers_test.go:338: TestAddons/parallel/Yakd: WARNING: pod list for "yakd-dashboard" "app.kubernetes.io/name=yakd-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:1049: ***** TestAddons/parallel/Yakd: pod "app.kubernetes.io/name=yakd-dashboard" failed to start within 2m0s: context deadline exceeded ****
addons_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-081397 -n addons-081397
addons_test.go:1049: TestAddons/parallel/Yakd: showing logs for failed pods as of 2025-12-12 00:06:34.810278114 +0000 UTC m=+659.519911100
addons_test.go:1049: (dbg) Run:  kubectl --context addons-081397 describe po yakd-dashboard-5ff678cb9-zbhhw -n yakd-dashboard
addons_test.go:1049: (dbg) kubectl --context addons-081397 describe po yakd-dashboard-5ff678cb9-zbhhw -n yakd-dashboard:
Name:             yakd-dashboard-5ff678cb9-zbhhw
Namespace:        yakd-dashboard
Priority:         0
Service Account:  yakd-dashboard
Node:             addons-081397/192.168.39.2
Start Time:       Thu, 11 Dec 2025 23:56:49 +0000
Labels:           app.kubernetes.io/instance=yakd-dashboard
                  app.kubernetes.io/name=yakd-dashboard
                  gcp-auth-skip-secret=true
                  pod-template-hash=5ff678cb9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
  IP:           10.244.0.12
Controlled By:  ReplicaSet/yakd-dashboard-5ff678cb9
Containers:
  yakd:
    Container ID:   
    Image:          docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624
    Image ID:       
    Port:           8080/TCP (http)
    Host Port:      0/TCP (http)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  256Mi
    Requests:
      memory:   128Mi
    Liveness:   http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Environment:
      KUBERNETES_NAMESPACE:  yakd-dashboard (v1:metadata.namespace)
      HOSTNAME:              yakd-dashboard-5ff678cb9-zbhhw (v1:metadata.name)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vv88f (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-vv88f:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m45s                  default-scheduler  Successfully assigned yakd-dashboard/yakd-dashboard-5ff678cb9-zbhhw to addons-081397
  Warning  Failed     5m54s (x2 over 7m35s)  kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": fetching target platform image selected from image index: reading manifest sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    107s (x5 over 9m38s)   kubelet            Pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
  Warning  Failed     63s (x5 over 7m35s)    kubelet            Error: ErrImagePull
  Warning  Failed     63s (x3 over 4m39s)    kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    11s (x15 over 7m34s)   kubelet            Back-off pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
  Warning  Failed     11s (x15 over 7m34s)   kubelet            Error: ImagePullBackOff
addons_test.go:1049: (dbg) Run:  kubectl --context addons-081397 logs yakd-dashboard-5ff678cb9-zbhhw -n yakd-dashboard
addons_test.go:1049: (dbg) Non-zero exit: kubectl --context addons-081397 logs yakd-dashboard-5ff678cb9-zbhhw -n yakd-dashboard: exit status 1 (105.290831ms)

** stderr ** 
	Error from server (BadRequest): container "yakd" in pod "yakd-dashboard-5ff678cb9-zbhhw" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:1049: kubectl --context addons-081397 logs yakd-dashboard-5ff678cb9-zbhhw -n yakd-dashboard: exit status 1
addons_test.go:1050: failed waiting for YAKD - Kubernetes Dashboard pod: app.kubernetes.io/name=yakd-dashboard within 2m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Yakd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-081397 -n addons-081397
helpers_test.go:253: <<< TestAddons/parallel/Yakd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Yakd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-081397 logs -n 25: (1.82374077s)
helpers_test.go:261: TestAddons/parallel/Yakd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-449217 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-449217 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-449217                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-449217 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ -o=json --download-only -p download-only-859495 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-859495 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-859495                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-859495 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-525167                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-525167 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-449217                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-449217 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-859495                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-859495 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ --download-only -p binary-mirror-928519 --alsologtostderr --binary-mirror http://127.0.0.1:46143 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-928519 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ -p binary-mirror-928519                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-928519 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ addons  │ enable dashboard -p addons-081397                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-081397        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-081397                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-081397        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ start   │ -p addons-081397 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-081397        │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 12 Dec 25 00:02 UTC │
	│ addons  │ addons-081397 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:02 UTC │ 12 Dec 25 00:02 UTC │
	│ addons  │ addons-081397 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ enable headlamp -p addons-081397 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-081397                                                                                                                                                                                                                                                                                                                                                                                         │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:03 UTC │
	│ addons  │ addons-081397 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:03 UTC │ 12 Dec 25 00:04 UTC │
	│ addons  │ addons-081397 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:04 UTC │ 12 Dec 25 00:04 UTC │
	│ addons  │ addons-081397 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:04 UTC │ 12 Dec 25 00:04 UTC │
	│ ssh     │ addons-081397 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-081397        │ jenkins │ v1.37.0 │ 12 Dec 25 00:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:55:51.508824  191080 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:51.508961  191080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:51.508968  191080 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:51.508973  191080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:51.509212  191080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1211 23:55:51.509810  191080 out.go:368] Setting JSON to false
	I1211 23:55:51.510832  191080 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":20296,"bootTime":1765477056,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:55:51.510906  191080 start.go:143] virtualization: kvm guest
	I1211 23:55:51.512916  191080 out.go:179] * [addons-081397] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1211 23:55:51.514286  191080 notify.go:221] Checking for updates...
	I1211 23:55:51.514305  191080 out.go:179]   - MINIKUBE_LOCATION=22101
	I1211 23:55:51.515624  191080 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:51.517281  191080 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1211 23:55:51.518706  191080 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:51.520288  191080 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:55:51.521862  191080 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:55:51.523574  191080 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 23:55:51.556952  191080 out.go:179] * Using the kvm2 driver based on user configuration
	I1211 23:55:51.558571  191080 start.go:309] selected driver: kvm2
	I1211 23:55:51.558600  191080 start.go:927] validating driver "kvm2" against <nil>
	I1211 23:55:51.558629  191080 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:55:51.559389  191080 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1211 23:55:51.559736  191080 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:55:51.559767  191080 cni.go:84] Creating CNI manager for ""
	I1211 23:55:51.559823  191080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:55:51.559835  191080 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 23:55:51.559888  191080 start.go:353] cluster config:
	{Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:55:51.560015  191080 iso.go:125] acquiring lock: {Name:mkc8bf4754eb4f0261bb252fe2c8bf1a2bf2967f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:55:51.561727  191080 out.go:179] * Starting "addons-081397" primary control-plane node in "addons-081397" cluster
	I1211 23:55:51.563063  191080 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:55:51.563108  191080 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1211 23:55:51.563116  191080 cache.go:65] Caching tarball of preloaded images
	I1211 23:55:51.563256  191080 preload.go:238] Found /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:55:51.563274  191080 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1211 23:55:51.563705  191080 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/config.json ...
	I1211 23:55:51.563732  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/config.json: {Name:mk3f56184a595aa65236de2721f264b9d77bbfd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:55:51.563928  191080 start.go:360] acquireMachinesLock for addons-081397: {Name:mk7557506c78bc6cb73692cb48d3039f590aa12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:55:51.564001  191080 start.go:364] duration metric: took 52.499µs to acquireMachinesLock for "addons-081397"
	I1211 23:55:51.564027  191080 start.go:93] Provisioning new machine with config: &{Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:55:51.564111  191080 start.go:125] createHost starting for "" (driver="kvm2")
	I1211 23:55:51.566772  191080 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1211 23:55:51.567024  191080 start.go:159] libmachine.API.Create for "addons-081397" (driver="kvm2")
	I1211 23:55:51.567078  191080 client.go:173] LocalClient.Create starting
	I1211 23:55:51.567214  191080 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem
	I1211 23:55:51.634646  191080 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem
	I1211 23:55:51.761850  191080 main.go:143] libmachine: creating domain...
	I1211 23:55:51.761879  191080 main.go:143] libmachine: creating network...
	I1211 23:55:51.763511  191080 main.go:143] libmachine: found existing default network
	I1211 23:55:51.763716  191080 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1211 23:55:51.764419  191080 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dae890}
	I1211 23:55:51.764553  191080 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-081397</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1211 23:55:51.771343  191080 main.go:143] libmachine: creating private network mk-addons-081397 192.168.39.0/24...
	I1211 23:55:51.876571  191080 main.go:143] libmachine: private network mk-addons-081397 192.168.39.0/24 created
	I1211 23:55:51.876999  191080 main.go:143] libmachine: <network>
	  <name>mk-addons-081397</name>
	  <uuid>f81ed5cb-0804-4477-9781-0372afa282e4</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:59:29:45'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1211 23:55:51.877044  191080 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397 ...
	I1211 23:55:51.877068  191080 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22101-186349/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
	I1211 23:55:51.877078  191080 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:51.877153  191080 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22101-186349/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22101-186349/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso...
	I1211 23:55:52.159080  191080 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa...
	I1211 23:55:52.239938  191080 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/addons-081397.rawdisk...
	I1211 23:55:52.239993  191080 main.go:143] libmachine: Writing magic tar header
	I1211 23:55:52.240026  191080 main.go:143] libmachine: Writing SSH key tar header
	I1211 23:55:52.240106  191080 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397 ...
	I1211 23:55:52.240169  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397
	I1211 23:55:52.240206  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397 (perms=drwx------)
	I1211 23:55:52.240215  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349/.minikube/machines
	I1211 23:55:52.240224  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:55:52.240232  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:52.240240  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349/.minikube (perms=drwxr-xr-x)
	I1211 23:55:52.240250  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22101-186349
	I1211 23:55:52.240258  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22101-186349 (perms=drwxrwxr-x)
	I1211 23:55:52.240268  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:55:52.240275  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:55:52.240283  191080 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1211 23:55:52.240291  191080 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1211 23:55:52.240299  191080 main.go:143] libmachine: checking permissions on dir: /home
	I1211 23:55:52.240306  191080 main.go:143] libmachine: skipping /home - not owner
	I1211 23:55:52.240309  191080 main.go:143] libmachine: defining domain...
	I1211 23:55:52.242720  191080 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-081397</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/addons-081397.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-081397'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1211 23:55:52.249320  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:07:bd:c2 in network default
	I1211 23:55:52.250641  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:52.250680  191080 main.go:143] libmachine: starting domain...
	I1211 23:55:52.250686  191080 main.go:143] libmachine: ensuring networks are active...
	I1211 23:55:52.252166  191080 main.go:143] libmachine: Ensuring network default is active
	I1211 23:55:52.253166  191080 main.go:143] libmachine: Ensuring network mk-addons-081397 is active
	I1211 23:55:52.254226  191080 main.go:143] libmachine: getting domain XML...
	I1211 23:55:52.255944  191080 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-081397</name>
	  <uuid>132f08c0-43de-4a3f-abcb-9cf58535d902</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/addons-081397.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:2b:32:89'/>
	      <source network='mk-addons-081397'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:07:bd:c2'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1211 23:55:53.688550  191080 main.go:143] libmachine: waiting for domain to start...
	I1211 23:55:53.691114  191080 main.go:143] libmachine: domain is now running
	I1211 23:55:53.691144  191080 main.go:143] libmachine: waiting for IP...
	I1211 23:55:53.692424  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:53.693801  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:53.693826  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:53.694334  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:53.694402  191080 retry.go:31] will retry after 260.574844ms: waiting for domain to come up
	I1211 23:55:53.957397  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:53.958627  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:53.958657  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:53.959170  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:53.959230  191080 retry.go:31] will retry after 343.725464ms: waiting for domain to come up
	I1211 23:55:54.305232  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:54.306166  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:54.306193  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:54.306730  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:54.306782  191080 retry.go:31] will retry after 478.083756ms: waiting for domain to come up
	I1211 23:55:54.787051  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:54.788263  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:54.788294  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:54.788968  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:54.789021  191080 retry.go:31] will retry after 586.83961ms: waiting for domain to come up
	I1211 23:55:55.378616  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:55.379761  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:55.379794  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:55.380438  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:55.380514  191080 retry.go:31] will retry after 629.739442ms: waiting for domain to come up
	I1211 23:55:56.011678  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:56.012771  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:56.012794  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:56.013869  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:56.013951  191080 retry.go:31] will retry after 838.290437ms: waiting for domain to come up
	I1211 23:55:56.853752  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:56.854450  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:56.854485  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:56.854918  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:56.854979  191080 retry.go:31] will retry after 1.020736825s: waiting for domain to come up
	I1211 23:55:57.877350  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:57.878104  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:57.878134  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:57.878522  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:57.878563  191080 retry.go:31] will retry after 1.394206578s: waiting for domain to come up
	I1211 23:55:59.275153  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:55:59.276377  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:55:59.276409  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:55:59.276994  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:55:59.277049  191080 retry.go:31] will retry after 1.4774988s: waiting for domain to come up
	I1211 23:56:00.757189  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:00.758049  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:00.758071  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:00.758450  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:00.758518  191080 retry.go:31] will retry after 1.704024367s: waiting for domain to come up
	I1211 23:56:02.464578  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:02.465672  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:02.465713  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:02.466390  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:02.466496  191080 retry.go:31] will retry after 2.558039009s: waiting for domain to come up
	I1211 23:56:05.028156  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:05.029424  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:05.029476  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:05.030141  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:05.030218  191080 retry.go:31] will retry after 2.713185396s: waiting for domain to come up
	I1211 23:56:07.745837  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:07.746810  191080 main.go:143] libmachine: no network interface addresses found for domain addons-081397 (source=lease)
	I1211 23:56:07.746835  191080 main.go:143] libmachine: trying to list again with source=arp
	I1211 23:56:07.747308  191080 main.go:143] libmachine: unable to find current IP address of domain addons-081397 in network mk-addons-081397 (interfaces detected: [])
	I1211 23:56:07.747359  191080 retry.go:31] will retry after 3.017005916s: waiting for domain to come up
	I1211 23:56:10.768106  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:10.769156  191080 main.go:143] libmachine: domain addons-081397 has current primary IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:10.769185  191080 main.go:143] libmachine: found domain IP: 192.168.39.2
	I1211 23:56:10.769196  191080 main.go:143] libmachine: reserving static IP address...
	I1211 23:56:10.769843  191080 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-081397", mac: "52:54:00:2b:32:89", ip: "192.168.39.2"} in network mk-addons-081397
	I1211 23:56:11.003302  191080 main.go:143] libmachine: reserved static IP address 192.168.39.2 for domain addons-081397
	I1211 23:56:11.003331  191080 main.go:143] libmachine: waiting for SSH...
	I1211 23:56:11.003337  191080 main.go:143] libmachine: Getting to WaitForSSH function...
	I1211 23:56:11.008569  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.009090  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.009115  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.009350  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.009619  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.009631  191080 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1211 23:56:11.126360  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:56:11.126895  191080 main.go:143] libmachine: domain creation complete
	I1211 23:56:11.129784  191080 machine.go:94] provisionDockerMachine start ...
	I1211 23:56:11.134589  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.135537  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.135574  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.136010  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.136277  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.136290  191080 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 23:56:11.257254  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1211 23:56:11.257302  191080 buildroot.go:166] provisioning hostname "addons-081397"
	I1211 23:56:11.261573  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.262389  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.262457  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.262926  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.263212  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.263234  191080 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-081397 && echo "addons-081397" | sudo tee /etc/hostname
	I1211 23:56:11.410142  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-081397
	
	I1211 23:56:11.414271  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.414882  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.414917  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.415210  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.415441  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.415482  191080 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-081397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-081397/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-081397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:56:11.555358  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:56:11.555395  191080 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22101-186349/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-186349/.minikube}
	I1211 23:56:11.555420  191080 buildroot.go:174] setting up certificates
	I1211 23:56:11.555443  191080 provision.go:84] configureAuth start
	I1211 23:56:11.558885  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.559509  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.559565  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.562716  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.563314  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.563346  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.563750  191080 provision.go:143] copyHostCerts
	I1211 23:56:11.563901  191080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/cert.pem (1123 bytes)
	I1211 23:56:11.564087  191080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/key.pem (1675 bytes)
	I1211 23:56:11.564163  191080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/ca.pem (1082 bytes)
	I1211 23:56:11.564231  191080 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem org=jenkins.addons-081397 san=[127.0.0.1 192.168.39.2 addons-081397 localhost minikube]
	I1211 23:56:11.604096  191080 provision.go:177] copyRemoteCerts
	I1211 23:56:11.604171  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:56:11.607337  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.607977  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.608015  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.608218  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:11.699591  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 23:56:11.739646  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1211 23:56:11.780870  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 23:56:11.821711  191080 provision.go:87] duration metric: took 266.231617ms to configureAuth
	I1211 23:56:11.821755  191080 buildroot.go:189] setting minikube options for container-runtime
	I1211 23:56:11.822007  191080 config.go:182] Loaded profile config "addons-081397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:11.826045  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.826550  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:11.826578  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:11.826785  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:11.827068  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:11.827088  191080 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:56:12.345303  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 23:56:12.345334  191080 machine.go:97] duration metric: took 1.2155135s to provisionDockerMachine
	I1211 23:56:12.345348  191080 client.go:176] duration metric: took 20.778259004s to LocalClient.Create
	I1211 23:56:12.345369  191080 start.go:167] duration metric: took 20.77834555s to libmachine.API.Create "addons-081397"
	I1211 23:56:12.345379  191080 start.go:293] postStartSetup for "addons-081397" (driver="kvm2")
	I1211 23:56:12.345393  191080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:56:12.345498  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:56:12.350156  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.351165  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.351226  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.351544  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:12.444149  191080 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:56:12.450354  191080 info.go:137] Remote host: Buildroot 2025.02
	I1211 23:56:12.450386  191080 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-186349/.minikube/addons for local assets ...
	I1211 23:56:12.450452  191080 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-186349/.minikube/files for local assets ...
	I1211 23:56:12.450508  191080 start.go:296] duration metric: took 105.122285ms for postStartSetup
	I1211 23:56:12.489061  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.489811  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.489855  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.490235  191080 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/config.json ...
	I1211 23:56:12.490597  191080 start.go:128] duration metric: took 20.9264692s to createHost
	I1211 23:56:12.493999  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.494451  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.494490  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.494674  191080 main.go:143] libmachine: Using SSH client type: native
	I1211 23:56:12.494897  191080 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1211 23:56:12.494909  191080 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1211 23:56:12.615405  191080 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765497372.576443288
	
	I1211 23:56:12.615439  191080 fix.go:216] guest clock: 1765497372.576443288
	I1211 23:56:12.615447  191080 fix.go:229] Guest: 2025-12-11 23:56:12.576443288 +0000 UTC Remote: 2025-12-11 23:56:12.490625673 +0000 UTC m=+21.040527790 (delta=85.817615ms)
	I1211 23:56:12.615500  191080 fix.go:200] guest clock delta is within tolerance: 85.817615ms
	I1211 23:56:12.615508  191080 start.go:83] releasing machines lock for "addons-081397", held for 21.051491664s
	I1211 23:56:12.619172  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.619799  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.619831  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.620772  191080 ssh_runner.go:195] Run: cat /version.json
	I1211 23:56:12.620876  191080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:56:12.625375  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.625530  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.626036  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.626063  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.626330  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:12.626345  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:12.626381  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:12.626618  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:12.717381  191080 ssh_runner.go:195] Run: systemctl --version
	I1211 23:56:12.749852  191080 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:56:13.078529  191080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 23:56:13.088885  191080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 23:56:13.089007  191080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:56:13.118717  191080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1211 23:56:13.118763  191080 start.go:496] detecting cgroup driver to use...
	I1211 23:56:13.118864  191080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:56:13.148400  191080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:56:13.169798  191080 docker.go:218] disabling cri-docker service (if available) ...
	I1211 23:56:13.169888  191080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:56:13.191896  191080 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:56:13.211802  191080 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:56:13.376765  191080 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:56:13.606305  191080 docker.go:234] disabling docker service ...
	I1211 23:56:13.606403  191080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:56:13.625180  191080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:56:13.643232  191080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:56:13.829218  191080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:56:14.000354  191080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:56:14.021612  191080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:56:14.050867  191080 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 23:56:14.050963  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.068612  191080 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 23:56:14.068701  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.086254  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.104697  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.123074  191080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:56:14.143227  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.161079  191080 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.188908  191080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:56:14.207821  191080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:56:14.223124  191080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1211 23:56:14.223216  191080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1211 23:56:14.252980  191080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 23:56:14.270522  191080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:14.430888  191080 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 23:56:14.564516  191080 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:56:14.564671  191080 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:56:14.574658  191080 start.go:564] Will wait 60s for crictl version
	I1211 23:56:14.574811  191080 ssh_runner.go:195] Run: which crictl
	I1211 23:56:14.580945  191080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 23:56:14.633033  191080 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1211 23:56:14.633155  191080 ssh_runner.go:195] Run: crio --version
	I1211 23:56:14.669436  191080 ssh_runner.go:195] Run: crio --version
	I1211 23:56:14.710252  191080 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1211 23:56:14.715883  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:14.716478  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:14.716519  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:14.716765  191080 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1211 23:56:14.724237  191080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:56:14.744504  191080 kubeadm.go:884] updating cluster {Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 23:56:14.744646  191080 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 23:56:14.744696  191080 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:56:14.782232  191080 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1211 23:56:14.782317  191080 ssh_runner.go:195] Run: which lz4
	I1211 23:56:14.788630  191080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 23:56:14.795116  191080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 23:56:14.795159  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1211 23:56:16.445424  191080 crio.go:462] duration metric: took 1.656827131s to copy over tarball
	I1211 23:56:16.445532  191080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 23:56:18.102205  191080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.656625041s)
	I1211 23:56:18.102245  191080 crio.go:469] duration metric: took 1.656768065s to extract the tarball
	I1211 23:56:18.102258  191080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1211 23:56:18.141443  191080 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:56:18.189200  191080 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:56:18.189229  191080 cache_images.go:86] Images are preloaded, skipping loading
	I1211 23:56:18.189239  191080 kubeadm.go:935] updating node { 192.168.39.2 8443 v1.34.2 crio true true} ...
	I1211 23:56:18.189344  191080 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-081397 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 23:56:18.189436  191080 ssh_runner.go:195] Run: crio config
	I1211 23:56:18.243325  191080 cni.go:84] Creating CNI manager for ""
	I1211 23:56:18.243368  191080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:56:18.243392  191080 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 23:56:18.243429  191080 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-081397 NodeName:addons-081397 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:56:18.243664  191080 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-081397"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 23:56:18.243802  191080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1211 23:56:18.259378  191080 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 23:56:18.259504  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 23:56:18.274263  191080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1211 23:56:18.301193  191080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:56:18.326928  191080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1211 23:56:18.352300  191080 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I1211 23:56:18.358187  191080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:56:18.378953  191080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:18.546541  191080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:56:18.581301  191080 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397 for IP: 192.168.39.2
	I1211 23:56:18.581326  191080 certs.go:195] generating shared ca certs ...
	I1211 23:56:18.581346  191080 certs.go:227] acquiring lock for ca certs: {Name:mkdc58adfd2cc299a76aeec81ac0d7f7d2a38e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.581537  191080 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key
	I1211 23:56:18.667363  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt ...
	I1211 23:56:18.667401  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt: {Name:mk1b55f33c9202ab57b68cfcba7feed18a5c869b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.667594  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key ...
	I1211 23:56:18.667607  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key: {Name:mk31aac21dc0da02b77cc3d7268007e3ddde417b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.667688  191080 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key
	I1211 23:56:18.787173  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt ...
	I1211 23:56:18.787207  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt: {Name:mk50e6f78e87c39b691065db3fbc22d4178cbab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.787389  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key ...
	I1211 23:56:18.787400  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key: {Name:mk3201307c9797e697c52cf7944b78460ad79885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.787484  191080 certs.go:257] generating profile certs ...
	I1211 23:56:18.787545  191080 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.key
	I1211 23:56:18.787567  191080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt with IP's: []
	I1211 23:56:18.836629  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt ...
	I1211 23:56:18.836666  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: {Name:mk4cd9c65ec1631677a6989710916cca92666039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.836848  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.key ...
	I1211 23:56:18.836869  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.key: {Name:mk158319f878ba2a2974fa05c9c5e81406b1ff04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.837128  191080 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68
	I1211 23:56:18.837174  191080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2]
	I1211 23:56:18.895323  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68 ...
	I1211 23:56:18.895360  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68: {Name:mka19cf3aa517a67c9823b9db6a0564ae2c88f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.895568  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68 ...
	I1211 23:56:18.895582  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68: {Name:mkcb32c8b3892cdbb32375c99cf73efb7e2d2ebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:18.895669  191080 certs.go:382] copying /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt.866ccc68 -> /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt
	I1211 23:56:18.895740  191080 certs.go:386] copying /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key.866ccc68 -> /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key
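The apiserver cert is generated under a suffixed name (`apiserver.crt.866ccc68`) and then copied to its canonical name; the suffix is derived from the cert's SAN IP set, so a change in cluster IPs yields a different filename and forces regeneration instead of reusing a stale cached cert. The log doesn't show which hash minikube uses; purely as an illustration of the idea, here is a hypothetical scheme deriving a stable 8-hex-char suffix from the sorted IP list:

```shell
# Hypothetical: map a set of SAN IPs to a deterministic filename suffix,
# so the same IP set always resolves to the same cached cert file.
ips="10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2"
suffix=$(printf '%s\n' $ips | sort | md5sum | cut -c1-8)
echo "apiserver.crt.$suffix"
```

Sorting first makes the suffix independent of the order the IPs are listed in.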
	I1211 23:56:18.895792  191080 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key
	I1211 23:56:18.895810  191080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt with IP's: []
	I1211 23:56:19.059957  191080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt ...
	I1211 23:56:19.059996  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt: {Name:mkeece2e2a9106cbaddd7935ae5c93b8b6536c2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:19.060202  191080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key ...
	I1211 23:56:19.060217  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key: {Name:mk7fa3201305a84265a30d592c7bfaa4ea9d3d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:19.060422  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 23:56:19.060478  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem (1082 bytes)
	I1211 23:56:19.060506  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:56:19.060532  191080 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem (1675 bytes)
	I1211 23:56:19.061341  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:56:19.104179  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 23:56:19.148345  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:56:19.191324  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1211 23:56:19.230603  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1211 23:56:19.274335  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 23:56:19.314103  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:56:19.355420  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1211 23:56:19.392791  191080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:56:19.429841  191080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:56:19.455328  191080 ssh_runner.go:195] Run: openssl version
	I1211 23:56:19.463919  191080 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.478287  191080 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 23:56:19.494141  191080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.501262  191080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.501357  191080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:56:19.511987  191080 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 23:56:19.527366  191080 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
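The `b5213941.0` filename above is OpenSSL's subject-hash naming scheme: `openssl x509 -hash` prints an 8-hex-digit hash of the certificate's subject name, and OpenSSL locates trust anchors in `/etc/ssl/certs` via `<hash>.0` symlinks, which is why minikube links `minikubeCA.pem` under that name. The same naming can be reproduced with a throwaway self-signed CA in a temp directory (no root needed; `demoCA` is a made-up subject):

```shell
# Reproduce OpenSSL's <subject-hash>.0 trust-store naming with a
# throwaway self-signed CA certificate.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.crt")
ln -fs "$dir/ca.crt" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```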
	I1211 23:56:19.544629  191080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:56:19.551139  191080 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:56:19.551211  191080 kubeadm.go:401] StartCluster: {Name:addons-081397 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-081397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:56:19.551367  191080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:56:19.551501  191080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:56:19.601329  191080 cri.go:89] found id: ""
	I1211 23:56:19.601414  191080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:56:19.615890  191080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:56:19.632616  191080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:56:19.646731  191080 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:56:19.646765  191080 kubeadm.go:158] found existing configuration files:
	
	I1211 23:56:19.646828  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:56:19.660106  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:56:19.660190  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:56:19.676276  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:56:19.690027  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:56:19.690116  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:56:19.705756  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:56:19.720625  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:56:19.720715  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:56:19.735359  191080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:56:19.750390  191080 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:56:19.750481  191080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 23:56:19.766951  191080 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 23:56:19.839756  191080 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1211 23:56:19.839847  191080 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 23:56:19.990602  191080 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:56:19.990863  191080 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:56:19.991043  191080 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:56:20.010193  191080 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:56:20.165972  191080 out.go:252]   - Generating certificates and keys ...
	I1211 23:56:20.166144  191080 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 23:56:20.166252  191080 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 23:56:20.166347  191080 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:56:20.551090  191080 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:56:20.773761  191080 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:56:21.138092  191080 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1211 23:56:21.423874  191080 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1211 23:56:21.424042  191080 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-081397 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I1211 23:56:21.781372  191080 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1211 23:56:21.781631  191080 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-081397 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I1211 23:56:22.783972  191080 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:56:22.973180  191080 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:56:23.396371  191080 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1211 23:56:23.396644  191080 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:56:23.822810  191080 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:56:24.134647  191080 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:56:24.293087  191080 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:56:24.542047  191080 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:56:24.865144  191080 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:56:24.865682  191080 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:56:24.869746  191080 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:56:24.871219  191080 out.go:252]   - Booting up control plane ...
	I1211 23:56:24.871351  191080 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:56:24.871523  191080 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:56:24.871597  191080 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:56:24.889102  191080 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:56:24.889275  191080 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 23:56:24.898513  191080 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 23:56:24.899113  191080 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:56:24.899188  191080 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 23:56:25.090240  191080 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:56:25.090397  191080 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:56:26.591737  191080 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502403531s
	I1211 23:56:26.595003  191080 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1211 23:56:26.595170  191080 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.2:8443/livez
	I1211 23:56:26.595328  191080 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1211 23:56:26.595488  191080 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1211 23:56:29.712995  191080 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.118803589s
	I1211 23:56:31.068676  191080 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.475444759s
	I1211 23:56:33.595001  191080 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002476016s
	I1211 23:56:33.626020  191080 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:56:33.642768  191080 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:56:33.672411  191080 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:56:33.672732  191080 kubeadm.go:319] [mark-control-plane] Marking the node addons-081397 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:56:33.697567  191080 kubeadm.go:319] [bootstrap-token] Using token: fx6xk6.14clsj7mtuippxxx
	I1211 23:56:33.699696  191080 out.go:252]   - Configuring RBAC rules ...
	I1211 23:56:33.699861  191080 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:56:33.705146  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:56:33.724431  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:56:33.735134  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:56:33.742267  191080 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:56:33.751087  191080 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:56:34.005984  191080 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:56:34.545250  191080 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1211 23:56:35.004202  191080 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1211 23:56:35.005119  191080 kubeadm.go:319] 
	I1211 23:56:35.005179  191080 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1211 23:56:35.005184  191080 kubeadm.go:319] 
	I1211 23:56:35.005261  191080 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1211 23:56:35.005268  191080 kubeadm.go:319] 
	I1211 23:56:35.005289  191080 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1211 23:56:35.005347  191080 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:56:35.005431  191080 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:56:35.005483  191080 kubeadm.go:319] 
	I1211 23:56:35.005568  191080 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1211 23:56:35.005579  191080 kubeadm.go:319] 
	I1211 23:56:35.005647  191080 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:56:35.005662  191080 kubeadm.go:319] 
	I1211 23:56:35.005707  191080 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1211 23:56:35.005772  191080 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:56:35.005838  191080 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:56:35.005844  191080 kubeadm.go:319] 
	I1211 23:56:35.005915  191080 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:56:35.005983  191080 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1211 23:56:35.005989  191080 kubeadm.go:319] 
	I1211 23:56:35.006133  191080 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fx6xk6.14clsj7mtuippxxx \
	I1211 23:56:35.006283  191080 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c0b88820597315620ec0510f9ac83d55213c46f15e2d7641e43c80784b0671ae \
	I1211 23:56:35.006317  191080 kubeadm.go:319] 	--control-plane 
	I1211 23:56:35.006322  191080 kubeadm.go:319] 
	I1211 23:56:35.006403  191080 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:56:35.006410  191080 kubeadm.go:319] 
	I1211 23:56:35.006504  191080 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fx6xk6.14clsj7mtuippxxx \
	I1211 23:56:35.006639  191080 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c0b88820597315620ec0510f9ac83d55213c46f15e2d7641e43c80784b0671ae 
	I1211 23:56:35.009065  191080 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 23:56:35.009128  191080 cni.go:84] Creating CNI manager for ""
	I1211 23:56:35.009169  191080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:56:35.012077  191080 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1211 23:56:35.013875  191080 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1211 23:56:35.030825  191080 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1211 23:56:35.061826  191080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:56:35.061965  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:35.061967  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-081397 minikube.k8s.io/updated_at=2025_12_11T23_56_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=addons-081397 minikube.k8s.io/primary=true
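The `minikube.k8s.io/updated_at=2025_12_11T23_56_35_0700` label above looks odd because Kubernetes label values may only contain alphanumerics plus `-`, `_`, and `.`, so the `:` and `+` (and here also `-`) of an RFC 3339 timestamp must be rewritten before it can be used as a label value. An illustrative one-liner producing the same shape (the input timestamp is a made-up example matching the log's value):

```shell
# Rewrite a timestamp into a Kubernetes-label-safe value: label values
# are limited to [A-Za-z0-9._-], so ':', '+', and '-' become '_'.
ts='2025-12-11T23:56:35+0700'
label=$(printf '%s' "$ts" | tr ':+-' '___')
echo "$label"
```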
	I1211 23:56:35.142016  191080 ops.go:34] apiserver oom_adj: -16
	I1211 23:56:35.257509  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:35.758327  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:36.257620  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:36.757733  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:37.258377  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:37.758134  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:38.258440  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:38.758050  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:39.258437  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:39.757704  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:40.258657  191080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:56:40.495051  191080 kubeadm.go:1114] duration metric: took 5.433189491s to wait for elevateKubeSystemPrivileges
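The run of identical `kubectl get sa default` commands above, spaced roughly 500 ms apart, is minikube polling until the `default` ServiceAccount exists (it only appears once the controller-manager's token controller has run), which the log then reports as the 5.43 s `elevateKubeSystemPrivileges` wait. The generic poll-until-ready pattern can be sketched as follows; the `ready` function here is a stand-in that simulates a check succeeding on its third attempt:

```shell
# Poll a readiness command until it succeeds or a deadline passes.
# "ready" is a simulated check that succeeds on the 3rd call.
state=$(mktemp)
echo 0 > "$state"
ready() {
  n=$(( $(cat "$state") + 1 )); echo "$n" > "$state"
  [ "$n" -ge 3 ]
}

attempts=0
deadline=$(( $(date +%s) + 10 ))
until ready; do
  attempts=$((attempts + 1))
  [ "$(date +%s)" -lt "$deadline" ] || { echo "timed out"; break; }
  sleep 0.1
done
echo "succeeded after $((attempts + 1)) attempts"
rm -f "$state"
```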
	I1211 23:56:40.495110  191080 kubeadm.go:403] duration metric: took 20.943905559s to StartCluster
	I1211 23:56:40.495141  191080 settings.go:142] acquiring lock: {Name:mkc54bc00cde7f692cc672e67ab0af4ae6a15c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:40.495326  191080 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1211 23:56:40.495951  191080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/kubeconfig: {Name:mkdf9d6588b522077beb3bc03f9eff4a2b248de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:56:40.496234  191080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:56:40.496280  191080 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:56:40.496340  191080 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1211 23:56:40.496488  191080 addons.go:70] Setting yakd=true in profile "addons-081397"
	I1211 23:56:40.496513  191080 addons.go:239] Setting addon yakd=true in "addons-081397"
	I1211 23:56:40.496519  191080 addons.go:70] Setting inspektor-gadget=true in profile "addons-081397"
	I1211 23:56:40.496555  191080 addons.go:239] Setting addon inspektor-gadget=true in "addons-081397"
	I1211 23:56:40.496571  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496571  191080 addons.go:70] Setting ingress=true in profile "addons-081397"
	I1211 23:56:40.496589  191080 config.go:182] Loaded profile config "addons-081397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:40.496605  191080 addons.go:239] Setting addon ingress=true in "addons-081397"
	I1211 23:56:40.496607  191080 addons.go:70] Setting metrics-server=true in profile "addons-081397"
	I1211 23:56:40.496619  191080 addons.go:70] Setting ingress-dns=true in profile "addons-081397"
	I1211 23:56:40.496623  191080 addons.go:239] Setting addon metrics-server=true in "addons-081397"
	I1211 23:56:40.496630  191080 addons.go:70] Setting cloud-spanner=true in profile "addons-081397"
	I1211 23:56:40.496582  191080 addons.go:70] Setting registry-creds=true in profile "addons-081397"
	I1211 23:56:40.496643  191080 addons.go:70] Setting gcp-auth=true in profile "addons-081397"
	I1211 23:56:40.496649  191080 addons.go:239] Setting addon cloud-spanner=true in "addons-081397"
	I1211 23:56:40.496652  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496658  191080 addons.go:239] Setting addon registry-creds=true in "addons-081397"
	I1211 23:56:40.496662  191080 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-081397"
	I1211 23:56:40.496670  191080 mustload.go:66] Loading cluster: addons-081397
	I1211 23:56:40.496674  191080 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-081397"
	I1211 23:56:40.496687  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496694  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496707  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496846  191080 config.go:182] Loaded profile config "addons-081397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 23:56:40.497455  191080 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-081397"
	I1211 23:56:40.497568  191080 addons.go:70] Setting registry=true in profile "addons-081397"
	I1211 23:56:40.497576  191080 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-081397"
	I1211 23:56:40.497609  191080 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-081397"
	I1211 23:56:40.497628  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497632  191080 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-081397"
	I1211 23:56:40.497653  191080 addons.go:70] Setting volcano=true in profile "addons-081397"
	I1211 23:56:40.497674  191080 addons.go:239] Setting addon volcano=true in "addons-081397"
	I1211 23:56:40.497708  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497837  191080 addons.go:70] Setting volumesnapshots=true in profile "addons-081397"
	I1211 23:56:40.497852  191080 addons.go:239] Setting addon volumesnapshots=true in "addons-081397"
	I1211 23:56:40.496582  191080 addons.go:70] Setting default-storageclass=true in profile "addons-081397"
	I1211 23:56:40.497876  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497894  191080 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-081397"
	I1211 23:56:40.496631  191080 addons.go:239] Setting addon ingress-dns=true in "addons-081397"
	I1211 23:56:40.498289  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496621  191080 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-081397"
	I1211 23:56:40.498652  191080 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-081397"
	I1211 23:56:40.498685  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.499011  191080 addons.go:70] Setting storage-provisioner=true in profile "addons-081397"
	I1211 23:56:40.499034  191080 addons.go:239] Setting addon storage-provisioner=true in "addons-081397"
	I1211 23:56:40.499062  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.496606  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.497596  191080 addons.go:239] Setting addon registry=true in "addons-081397"
	I1211 23:56:40.496653  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.499671  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.500663  191080 out.go:179] * Verifying Kubernetes components...
	I1211 23:56:40.502382  191080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:56:40.503922  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.506960  191080 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1211 23:56:40.507005  191080 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1211 23:56:40.507060  191080 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1211 23:56:40.506993  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:40.507197  191080 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-081397"
	I1211 23:56:40.507613  191080 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	W1211 23:56:40.508273  191080 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1211 23:56:40.508767  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.508846  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1211 23:56:40.508884  191080 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1211 23:56:40.508983  191080 addons.go:239] Setting addon default-storageclass=true in "addons-081397"
	I1211 23:56:40.509037  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:40.509123  191080 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1211 23:56:40.509134  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1211 23:56:40.509862  191080 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1211 23:56:40.509879  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1211 23:56:40.510705  191080 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1211 23:56:40.510765  191080 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:56:40.510708  191080 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1211 23:56:40.510709  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1211 23:56:40.510780  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1211 23:56:40.510963  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1211 23:56:40.512352  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1211 23:56:40.512423  191080 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:56:40.512795  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1211 23:56:40.513366  191080 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1211 23:56:40.513405  191080 out.go:179]   - Using image docker.io/registry:3.0.0
	I1211 23:56:40.513427  191080 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:56:40.513856  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1211 23:56:40.513452  191080 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:56:40.513569  191080 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1211 23:56:40.514419  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1211 23:56:40.514823  191080 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1211 23:56:40.515501  191080 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:56:40.515566  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1211 23:56:40.516012  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:40.516028  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1211 23:56:40.516032  191080 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:56:40.516099  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:56:40.516097  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1211 23:56:40.516114  191080 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1211 23:56:40.517202  191080 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:56:40.517226  191080 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:56:40.517560  191080 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1211 23:56:40.517676  191080 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1211 23:56:40.517948  191080 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:56:40.517967  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1211 23:56:40.519009  191080 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1211 23:56:40.519029  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1211 23:56:40.519106  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1211 23:56:40.520326  191080 out.go:179]   - Using image docker.io/busybox:stable
	I1211 23:56:40.521667  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1211 23:56:40.521748  191080 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:56:40.521773  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1211 23:56:40.523191  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.524446  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1211 23:56:40.524538  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.525508  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.525522  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.525556  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.526184  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.526857  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.526995  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.526987  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.527300  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1211 23:56:40.526876  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.528176  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.528215  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.528450  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.528655  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.528687  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.528793  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.529400  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.530020  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1211 23:56:40.530078  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.530252  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.530288  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.531125  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.531509  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.531550  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.531581  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.531691  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.532336  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.532490  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.532676  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.532971  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.533016  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.533392  191080 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1211 23:56:40.533786  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.533419  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.533922  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.534209  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.534245  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.534763  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.534785  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.534834  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.534900  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.535083  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.535167  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.535342  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.535606  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1211 23:56:40.535631  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1211 23:56:40.535965  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.536268  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.536305  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536400  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.536418  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536548  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.536583  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.536615  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536653  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.536963  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.536994  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.537838  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.537879  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.538098  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:40.540825  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.541431  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:40.541502  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:40.541709  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	W1211 23:56:41.043758  191080 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53944->192.168.39.2:22: read: connection reset by peer
	I1211 23:56:41.043809  191080 retry.go:31] will retry after 311.842554ms: ssh: handshake failed: read tcp 192.168.39.1:53944->192.168.39.2:22: read: connection reset by peer
	W1211 23:56:41.043894  191080 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53950->192.168.39.2:22: read: connection reset by peer
	I1211 23:56:41.043909  191080 retry.go:31] will retry after 329.825082ms: ssh: handshake failed: read tcp 192.168.39.1:53950->192.168.39.2:22: read: connection reset by peer
	I1211 23:56:41.808354  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1211 23:56:41.808403  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1211 23:56:41.861654  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1211 23:56:41.861692  191080 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1211 23:56:41.896943  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:56:41.918961  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:56:41.924444  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:56:41.946144  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:56:42.009856  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1211 23:56:42.009896  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1211 23:56:42.018699  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1211 23:56:42.069883  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:56:42.072418  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1211 23:56:42.145123  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:56:42.186767  191080 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1211 23:56:42.186812  191080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1211 23:56:42.259103  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:56:42.428120  191080 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.93183404s)
	I1211 23:56:42.428248  191080 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.925817571s)
	I1211 23:56:42.428352  191080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:56:42.428498  191080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1211 23:56:42.452426  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1211 23:56:42.452489  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1211 23:56:42.484208  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1211 23:56:42.484275  191080 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1211 23:56:42.588545  191080 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1211 23:56:42.588585  191080 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1211 23:56:42.633670  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1211 23:56:42.633723  191080 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1211 23:56:42.637947  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:56:42.706175  191080 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1211 23:56:42.706217  191080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1211 23:56:42.968807  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1211 23:56:42.968847  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1211 23:56:43.007497  191080 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:56:43.007532  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1211 23:56:43.028368  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1211 23:56:43.028403  191080 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1211 23:56:43.092788  191080 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:56:43.092826  191080 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1211 23:56:43.128649  191080 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1211 23:56:43.128687  191080 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1211 23:56:43.289535  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1211 23:56:43.289580  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1211 23:56:43.346982  191080 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:56:43.347023  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1211 23:56:43.401818  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:56:43.523249  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:56:43.586597  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1211 23:56:43.586642  191080 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1211 23:56:43.774067  191080 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1211 23:56:43.774118  191080 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1211 23:56:43.801000  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:56:44.025438  191080 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:44.025490  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1211 23:56:44.174620  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.277572584s)
	I1211 23:56:44.174769  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.250262195s)
	I1211 23:56:44.193708  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1211 23:56:44.193737  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1211 23:56:44.555609  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:44.920026  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1211 23:56:44.920060  191080 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1211 23:56:45.697268  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1211 23:56:45.697305  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1211 23:56:46.254763  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1211 23:56:46.254799  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1211 23:56:46.581598  191080 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:56:46.581642  191080 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1211 23:56:46.687719  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:56:47.971016  191080 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1211 23:56:47.975173  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:47.976154  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:47.976199  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:47.976614  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:48.491380  191080 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1211 23:56:48.692419  191080 addons.go:239] Setting addon gcp-auth=true in "addons-081397"
	I1211 23:56:48.692544  191080 host.go:66] Checking if "addons-081397" exists ...
	I1211 23:56:48.695342  191080 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1211 23:56:48.698779  191080 main.go:143] libmachine: domain addons-081397 has defined MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:48.699427  191080 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:32:89", ip: ""} in network mk-addons-081397: {Iface:virbr1 ExpiryTime:2025-12-12 00:56:09 +0000 UTC Type:0 Mac:52:54:00:2b:32:89 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-081397 Clientid:01:52:54:00:2b:32:89}
	I1211 23:56:48.699601  191080 main.go:143] libmachine: domain addons-081397 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:32:89 in network mk-addons-081397
	I1211 23:56:48.699980  191080 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/addons-081397/id_rsa Username:docker}
	I1211 23:56:48.892556  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.973548228s)
	I1211 23:56:49.408333  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.462135831s)
	I1211 23:56:49.408425  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.389664864s)
	I1211 23:56:51.938139  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.865666864s)
	I1211 23:56:51.938187  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.793007267s)
	I1211 23:56:51.938385  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.679223761s)
	I1211 23:56:51.938486  191080 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.509912418s)
	I1211 23:56:51.938505  191080 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.510132207s)
	I1211 23:56:51.938523  191080 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1211 23:56:51.938693  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.300704152s)
	I1211 23:56:51.938740  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.868817664s)
	I1211 23:56:51.938763  191080 addons.go:495] Verifying addon ingress=true in "addons-081397"
	I1211 23:56:51.938775  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.536910017s)
	I1211 23:56:51.938799  191080 addons.go:495] Verifying addon registry=true in "addons-081397"
	I1211 23:56:51.939144  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.415830154s)
	I1211 23:56:51.939191  191080 addons.go:495] Verifying addon metrics-server=true in "addons-081397"
	I1211 23:56:51.939242  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.138197843s)
	I1211 23:56:51.939362  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.383652629s)
	W1211 23:56:51.939405  191080 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:56:51.939434  191080 retry.go:31] will retry after 326.794424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:56:51.939960  191080 node_ready.go:35] waiting up to 6m0s for node "addons-081397" to be "Ready" ...
	I1211 23:56:51.941538  191080 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-081397 service yakd-dashboard -n yakd-dashboard
	
	I1211 23:56:51.941540  191080 out.go:179] * Verifying registry addon...
	I1211 23:56:51.941553  191080 out.go:179] * Verifying ingress addon...
	I1211 23:56:51.943990  191080 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1211 23:56:51.944213  191080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1211 23:56:51.964791  191080 node_ready.go:49] node "addons-081397" is "Ready"
	I1211 23:56:51.964839  191080 node_ready.go:38] duration metric: took 24.813054ms for node "addons-081397" to be "Ready" ...
	I1211 23:56:51.964861  191080 api_server.go:52] waiting for apiserver process to appear ...
	I1211 23:56:51.964931  191080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 23:56:52.001706  191080 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:56:52.001747  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:52.002821  191080 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1211 23:56:52.002849  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:52.266441  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:56:52.467902  191080 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-081397" context rescaled to 1 replicas
	I1211 23:56:52.469927  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:52.473967  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:52.974199  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:53.067246  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.503323  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.503384  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:53.644012  191080 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.948623338s)
	I1211 23:56:53.644102  191080 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.679150419s)
	I1211 23:56:53.644155  191080 api_server.go:72] duration metric: took 13.147840239s to wait for apiserver process to appear ...
	I1211 23:56:53.644280  191080 api_server.go:88] waiting for apiserver healthz status ...
	I1211 23:56:53.644328  191080 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I1211 23:56:53.644007  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.956173954s)
	I1211 23:56:53.644412  191080 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-081397"
	I1211 23:56:53.646266  191080 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1211 23:56:53.647231  191080 out.go:179] * Verifying csi-hostpath-driver addon...
	I1211 23:56:53.648911  191080 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1211 23:56:53.650424  191080 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1211 23:56:53.650455  191080 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1211 23:56:53.650539  191080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1211 23:56:53.695860  191080 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I1211 23:56:53.698147  191080 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:56:53.698187  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:53.714330  191080 api_server.go:141] control plane version: v1.34.2
	I1211 23:56:53.714403  191080 api_server.go:131] duration metric: took 70.105256ms to wait for apiserver health ...
	I1211 23:56:53.714423  191080 system_pods.go:43] waiting for kube-system pods to appear ...
	I1211 23:56:53.722159  191080 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1211 23:56:53.722205  191080 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1211 23:56:53.741176  191080 system_pods.go:59] 20 kube-system pods found
	I1211 23:56:53.741243  191080 system_pods.go:61] "amd-gpu-device-plugin-djxv6" [4f5aeb19-64d9-4433-b64e-e6cfb3654839] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:56:53.741269  191080 system_pods.go:61] "coredns-66bc5c9577-dmswf" [30230e03-4081-4208-bdd5-a93b39aaaa41] Running
	I1211 23:56:53.741279  191080 system_pods.go:61] "coredns-66bc5c9577-prc7f" [f5b3faeb-71ca-42c9-b591-4b563dca360b] Running
	I1211 23:56:53.741289  191080 system_pods.go:61] "csi-hostpath-attacher-0" [fd013040-9f15-4172-87f5-15b174a58d87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:56:53.741297  191080 system_pods.go:61] "csi-hostpath-resizer-0" [75ee82ce-3700-4961-8ce6-bd9b588cc478] Pending
	I1211 23:56:53.741307  191080 system_pods.go:61] "csi-hostpathplugin-69v6v" [d2bf83fd-6890-4456-896a-d83906c2ad1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:56:53.741316  191080 system_pods.go:61] "etcd-addons-081397" [76acbe8b-6c34-47ed-9c17-d10d2b90f854] Running
	I1211 23:56:53.741323  191080 system_pods.go:61] "kube-apiserver-addons-081397" [aa5c2483-4778-415c-983d-77b4683c028a] Running
	I1211 23:56:53.741330  191080 system_pods.go:61] "kube-controller-manager-addons-081397" [f66f3f89-2978-45f0-85e3-9b2485e2c357] Running
	I1211 23:56:53.741340  191080 system_pods.go:61] "kube-ingress-dns-minikube" [7b7df0e3-b14f-46c9-8338-f54a7557bdd0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:56:53.741347  191080 system_pods.go:61] "kube-proxy-jwqpk" [dd248790-eb90-4f63-bb25-4253ea30ba17] Running
	I1211 23:56:53.741358  191080 system_pods.go:61] "kube-scheduler-addons-081397" [d576bcfe-e1bc-4f95-be05-44d726aad7bf] Running
	I1211 23:56:53.741367  191080 system_pods.go:61] "metrics-server-85b7d694d7-zfsb8" [fd42d792-5bd0-449d-92f8-f0c0c74c4975] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:56:53.741382  191080 system_pods.go:61] "nvidia-device-plugin-daemonset-rbpjs" [22649f4f-f712-4939-86ae-d4e2f87acc0a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:56:53.741390  191080 system_pods.go:61] "registry-6b586f9694-f9q5b" [96c372a4-ae7e-4df5-9a48-525fc42f8bc5] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:56:53.741401  191080 system_pods.go:61] "registry-creds-764b6fb674-fn77c" [4d72d75e-437b-4632-9fb1-3a7067c23d39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:56:53.741414  191080 system_pods.go:61] "registry-proxy-fdnc8" [3d8a40d6-255a-4a70-aee7-d5a6ce60f129] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:56:53.741427  191080 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6pxqk" [9c319b4a-5f0f-4d81-9f15-6e457050470a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.741445  191080 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7pg65" [460595b1-c11f-4b8a-9d7c-5805587a937c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.741455  191080 system_pods.go:61] "storage-provisioner" [0c582cdc-c50b-4759-b05c-e3b1cd92e04f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:56:53.741497  191080 system_pods.go:74] duration metric: took 27.063753ms to wait for pod list to return data ...
	I1211 23:56:53.741514  191080 default_sa.go:34] waiting for default service account to be created ...
	I1211 23:56:53.789135  191080 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:56:53.789157  191080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1211 23:56:53.793775  191080 default_sa.go:45] found service account: "default"
	I1211 23:56:53.793806  191080 default_sa.go:55] duration metric: took 52.279991ms for default service account to be created ...
	I1211 23:56:53.793821  191080 system_pods.go:116] waiting for k8s-apps to be running ...
	I1211 23:56:53.844257  191080 system_pods.go:86] 20 kube-system pods found
	I1211 23:56:53.844307  191080 system_pods.go:89] "amd-gpu-device-plugin-djxv6" [4f5aeb19-64d9-4433-b64e-e6cfb3654839] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:56:53.844317  191080 system_pods.go:89] "coredns-66bc5c9577-dmswf" [30230e03-4081-4208-bdd5-a93b39aaaa41] Running
	I1211 23:56:53.844326  191080 system_pods.go:89] "coredns-66bc5c9577-prc7f" [f5b3faeb-71ca-42c9-b591-4b563dca360b] Running
	I1211 23:56:53.844334  191080 system_pods.go:89] "csi-hostpath-attacher-0" [fd013040-9f15-4172-87f5-15b174a58d87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:56:53.844340  191080 system_pods.go:89] "csi-hostpath-resizer-0" [75ee82ce-3700-4961-8ce6-bd9b588cc478] Pending
	I1211 23:56:53.844352  191080 system_pods.go:89] "csi-hostpathplugin-69v6v" [d2bf83fd-6890-4456-896a-d83906c2ad1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:56:53.844358  191080 system_pods.go:89] "etcd-addons-081397" [76acbe8b-6c34-47ed-9c17-d10d2b90f854] Running
	I1211 23:56:53.844364  191080 system_pods.go:89] "kube-apiserver-addons-081397" [aa5c2483-4778-415c-983d-77b4683c028a] Running
	I1211 23:56:53.844369  191080 system_pods.go:89] "kube-controller-manager-addons-081397" [f66f3f89-2978-45f0-85e3-9b2485e2c357] Running
	I1211 23:56:53.844377  191080 system_pods.go:89] "kube-ingress-dns-minikube" [7b7df0e3-b14f-46c9-8338-f54a7557bdd0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:56:53.844387  191080 system_pods.go:89] "kube-proxy-jwqpk" [dd248790-eb90-4f63-bb25-4253ea30ba17] Running
	I1211 23:56:53.844394  191080 system_pods.go:89] "kube-scheduler-addons-081397" [d576bcfe-e1bc-4f95-be05-44d726aad7bf] Running
	I1211 23:56:53.844407  191080 system_pods.go:89] "metrics-server-85b7d694d7-zfsb8" [fd42d792-5bd0-449d-92f8-f0c0c74c4975] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:56:53.844416  191080 system_pods.go:89] "nvidia-device-plugin-daemonset-rbpjs" [22649f4f-f712-4939-86ae-d4e2f87acc0a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:56:53.844429  191080 system_pods.go:89] "registry-6b586f9694-f9q5b" [96c372a4-ae7e-4df5-9a48-525fc42f8bc5] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:56:53.844439  191080 system_pods.go:89] "registry-creds-764b6fb674-fn77c" [4d72d75e-437b-4632-9fb1-3a7067c23d39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1211 23:56:53.844475  191080 system_pods.go:89] "registry-proxy-fdnc8" [3d8a40d6-255a-4a70-aee7-d5a6ce60f129] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:56:53.844488  191080 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6pxqk" [9c319b4a-5f0f-4d81-9f15-6e457050470a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.844498  191080 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7pg65" [460595b1-c11f-4b8a-9d7c-5805587a937c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:56:53.844507  191080 system_pods.go:89] "storage-provisioner" [0c582cdc-c50b-4759-b05c-e3b1cd92e04f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1211 23:56:53.844519  191080 system_pods.go:126] duration metric: took 50.689154ms to wait for k8s-apps to be running ...
	I1211 23:56:53.844532  191080 system_svc.go:44] waiting for kubelet service to be running ....
	I1211 23:56:53.844608  191080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 23:56:53.902002  191080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:56:53.955676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:53.955845  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.160809  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:54.448357  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:54.453907  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.660400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:54.960099  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:54.962037  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:54.993140  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.726594297s)
	I1211 23:56:54.993153  191080 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.148518984s)
	I1211 23:56:54.993221  191080 system_svc.go:56] duration metric: took 1.148683395s WaitForService to wait for kubelet
	I1211 23:56:54.993231  191080 kubeadm.go:587] duration metric: took 14.496919105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:56:54.993249  191080 node_conditions.go:102] verifying NodePressure condition ...
	I1211 23:56:55.001998  191080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1211 23:56:55.002046  191080 node_conditions.go:123] node cpu capacity is 2
	I1211 23:56:55.002095  191080 node_conditions.go:105] duration metric: took 8.839368ms to run NodePressure ...
	I1211 23:56:55.002114  191080 start.go:242] waiting for startup goroutines ...
	I1211 23:56:55.161169  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:55.517092  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:55.539796  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:55.579689  191080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.677622577s)
	I1211 23:56:55.581053  191080 addons.go:495] Verifying addon gcp-auth=true in "addons-081397"
	I1211 23:56:55.583166  191080 out.go:179] * Verifying gcp-auth addon...
	I1211 23:56:55.585775  191080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1211 23:56:55.610126  191080 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1211 23:56:55.610157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:55.684117  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:55.957671  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:55.958053  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.094446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:56.159426  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:56.454250  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:56.454305  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.593123  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:56.698651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:56.955164  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:56.955254  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:57.097317  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:57.160266  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:57.454368  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:57.455193  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:57.593869  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:57.657455  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:57.952124  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:57.953630  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:58.091657  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:58.192765  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:58.448854  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:58.454640  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:58.590861  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:58.656664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:58.951563  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:58.951970  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:59.092726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:59.156085  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:59.453106  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:59.455110  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:59.594050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:59.659663  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:59.950597  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:56:59.953854  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:00.098806  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:00.158739  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:00.451405  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:00.451426  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:00.592070  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:00.656305  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:00.954392  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:00.957143  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:01.089837  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:01.157925  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:01.451549  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:01.451947  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:01.592758  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:01.655586  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:01.950439  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:01.950524  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:02.091801  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:02.155816  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:02.449634  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:02.450369  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:02.591242  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:02.655088  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:02.952327  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:02.952622  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:03.090558  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:03.166505  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:03.449517  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:03.450499  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:03.590638  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:03.656141  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:03.950487  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:03.950653  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:04.092052  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:04.164233  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:04.452727  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:04.453010  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:04.590564  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:04.658766  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:04.956776  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:04.960214  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:05.089595  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:05.158346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:05.454648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:05.455366  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:05.589445  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:05.725092  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:05.950042  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:05.953003  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:06.093507  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:06.156581  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:06.448896  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:06.452118  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:06.589736  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:06.660370  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:06.952602  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:06.952699  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:07.093794  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:07.159924  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:07.451182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:07.452486  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:07.593007  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:07.655785  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:07.955585  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:07.955714  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:08.092772  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:08.159691  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:08.452421  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:08.453004  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:08.596649  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:08.657754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:09.151194  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:09.163928  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:09.166605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:09.166806  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:09.452575  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:09.452859  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:09.591132  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:09.658223  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:09.953976  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:09.958754  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:10.097815  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:10.160643  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:10.449852  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:10.449848  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:10.593346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:10.655349  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:10.951129  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:10.958386  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:11.091038  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:11.163797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:11.451681  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:11.455196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:11.594544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:11.665061  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:11.951173  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:11.952848  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:12.093150  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:12.157974  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:12.449252  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:12.452312  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:12.591441  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:12.661703  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:12.958989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:12.960103  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:13.089485  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:13.156074  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:13.452932  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:13.453001  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:13.592446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:13.658121  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:13.962529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:13.963557  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:14.091969  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:14.158221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:14.449389  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:14.450691  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:14.594295  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:14.659320  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:14.949072  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:14.952087  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:15.089407  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:15.155332  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:15.813442  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:15.813494  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:15.813503  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:15.813799  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:15.954853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:15.957241  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:16.091368  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:16.157225  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:16.462043  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:16.465005  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:16.590303  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:16.693434  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:16.948523  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:16.948597  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:17.090370  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:17.155629  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:17.450403  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:17.450602  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:17.592008  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:17.656775  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:17.952011  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:17.953801  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:18.090174  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:18.155951  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:18.447617  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:18.448323  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:18.590230  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:18.656537  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:18.948537  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:18.948865  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:19.090670  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:19.156440  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:19.448193  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:19.449148  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:19.589950  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:19.655094  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:19.949387  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:19.950227  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:20.092096  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:20.155631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:20.448262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:20.449009  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:20.589664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:20.655779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:20.952599  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:20.952790  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:21.090743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:21.154683  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:21.451260  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:21.452256  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:21.593154  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:21.656811  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:22.109419  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:22.111778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:22.111954  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:22.158011  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:22.452303  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:22.452748  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:22.590963  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:22.655856  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:22.949568  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:22.949619  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:23.091094  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:23.155741  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:23.449880  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:23.449919  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:23.590590  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:23.658406  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:23.948819  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:23.949527  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:24.090686  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:24.154696  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:24.449105  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:24.449431  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:24.591490  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:24.656162  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:24.948671  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:24.948867  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:25.089628  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:25.157506  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:25.448637  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:25.449144  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:25.589959  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:25.654962  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:25.949839  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:25.950510  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:26.091561  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:26.156350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:26.448681  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:26.448908  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:26.590622  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:26.657217  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:26.948184  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:26.950039  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:27.089200  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:27.155324  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:27.449676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:27.449798  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:27.590267  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:27.655290  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:27.948648  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:27.948982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:28.090233  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:28.155268  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:28.448106  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:28.448387  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:28.589756  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:28.656215  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:28.948715  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:28.949727  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:29.090059  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:29.155563  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:29.448981  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:29.449967  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:29.589372  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:29.656746  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:29.951190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:29.951266  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.089966  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:30.156024  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:30.449807  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:30.449940  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.592795  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:30.655965  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:30.949686  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:30.949854  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:31.089144  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:31.155728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:31.448249  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:31.451576  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:31.590176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:31.656389  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:31.949905  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:31.950451  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:32.090191  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:32.156400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:32.449602  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:32.449836  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:32.591164  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:32.657213  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:32.948520  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:32.948804  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:33.089649  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:33.156050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:33.450227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:33.450227  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:33.590456  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:33.656274  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:33.949256  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:33.949347  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:34.091203  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:34.156547  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:34.450354  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:34.450411  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:34.591349  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:34.656156  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:34.948431  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:34.948893  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:35.089378  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:35.156784  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:35.450919  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:35.451766  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:35.589587  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:35.656818  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:35.949417  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:35.950715  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:36.090779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:36.155710  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:36.452002  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:36.452240  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:36.590343  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:36.655697  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:36.949354  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:36.949385  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:37.091333  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:37.155660  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:37.448936  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:37.449075  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:37.590116  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:37.656050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:37.949528  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:37.950239  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:38.090375  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:38.156630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:38.449400  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:38.449825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:38.590511  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:38.655832  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:38.948985  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:38.949093  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:39.090158  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:39.155820  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:39.449629  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:39.451242  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:39.590400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:39.656829  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:39.948865  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:39.949106  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:40.089281  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:40.156612  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:40.450580  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:40.450998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:40.590980  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:40.655008  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:40.949712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:40.949853  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:41.089939  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:41.155401  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:41.448080  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:41.451541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:41.590421  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:41.656608  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:41.950025  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:41.950358  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:42.090340  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:42.159954  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:42.450058  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:42.450329  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:42.589818  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:42.655716  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:42.948985  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:42.952252  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:43.090380  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:43.155314  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:43.450015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:43.450202  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:43.590190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:43.655086  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:43.948401  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:43.949453  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:44.090744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:44.154784  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:44.449614  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:44.449642  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:44.590645  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:44.656686  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:44.950021  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:44.951009  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:45.090020  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:45.155822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:45.449438  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:45.449646  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:45.590975  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:45.656192  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:45.949128  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:45.949580  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:46.091176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:46.155290  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:46.448997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:46.450442  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:46.590802  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:46.654435  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:46.949893  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:46.950255  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:47.091631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:47.156353  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:47.450093  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:47.455744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:47.622817  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:47.657485  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:47.951291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:47.953670  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:48.093758  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:48.155393  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:48.452298  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:48.452366  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:48.592111  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:48.657572  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:48.951626  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:48.952512  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:49.091082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:49.157173  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:49.452908  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:49.453973  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:49.591765  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:49.699112  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:49.951994  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:49.953086  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:50.090983  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:50.162358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:50.452611  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:50.453823  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:50.593450  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:50.664907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:50.961300  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:50.961709  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:51.105008  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:51.168542  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:51.460773  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:51.463367  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:51.596820  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:51.659982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:51.954007  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:51.956978  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:52.090564  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:52.156735  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:52.459306  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:52.461605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:52.591646  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:52.659476  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:52.949249  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:52.949360  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:53.091342  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:53.158735  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:53.451408  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:53.454585  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:53.590776  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:53.656237  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:53.954524  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:53.954679  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:54.095794  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:54.159448  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:54.576047  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:54.576308  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:54.590001  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:54.659406  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:54.950589  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:54.950691  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:55.092084  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:55.157456  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:55.451531  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:55.451907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:55.590653  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:55.655648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:55.949374  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:55.953638  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:56.090027  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:56.156602  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:56.448573  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:56.448625  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:56.593728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:56.658937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:56.952879  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:56.952929  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:57.091934  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:57.159057  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:57.451436  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:57.455516  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:57.591262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:57.659040  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:57.954096  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:57.955115  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:58.092045  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:58.156829  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:58.449510  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:58.452029  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:58.591835  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:58.655523  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:58.950729  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:58.951027  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:59.091806  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:59.192766  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:59.450923  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:59.450927  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:57:59.589799  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:57:59.654677  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:57:59.950001  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:57:59.950014  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:00.090853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:00.157042  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:00.448336  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:00.448337  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:00.592094  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:00.658087  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:00.957344  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:00.957336  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:01.092515  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:01.156002  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:01.448332  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:01.450557  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:01.590308  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:01.655760  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:01.948943  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:01.948994  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:02.090034  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:02.155101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:02.448750  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:02.451925  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:02.591378  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:02.692860  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:02.948711  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:02.949373  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:03.090905  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:03.155274  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:03.564036  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:03.566077  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:03.589166  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:03.656333  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:03.950104  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:03.951138  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:04.090344  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:04.155950  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:04.449528  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:04.449593  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:04.590190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:04.655882  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:04.949372  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:04.949508  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:05.090348  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:05.156443  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:05.449652  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:05.449659  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:05.590664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:05.657339  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:05.948372  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:05.949962  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:06.090065  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:06.157993  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:06.447621  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:06.447687  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:06.589658  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:06.656748  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:06.950654  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:06.952348  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:07.090424  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:07.154888  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:07.449307  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:07.449391  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:07.591221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:07.655886  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:07.949784  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:07.950390  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:08.090645  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:08.154567  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:08.450533  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:08.451325  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:08.590268  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:08.657358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:08.950295  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:08.950733  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:09.091051  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:09.155807  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:09.449202  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:09.449232  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:09.590096  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:09.654983  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:09.950294  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:09.950637  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:10.092096  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:10.155487  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:10.449477  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:10.450235  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:10.592429  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:10.655383  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:10.950193  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:10.951385  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:11.090841  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:11.154640  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:11.448065  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:11.448340  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:11.590017  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:11.656300  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:11.950170  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:11.950312  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:12.090862  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:12.156842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:12.450055  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:12.451008  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:12.590233  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:12.656044  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:12.950138  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:12.950258  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:13.090444  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:13.155597  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:13.449740  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:13.449778  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:13.591284  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:13.655552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:13.948617  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:13.949836  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:14.090622  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:14.156895  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:14.448993  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:14.450176  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:14.589623  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:14.656671  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:14.950841  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:14.951121  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:15.090529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:15.155811  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:15.449246  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:15.449410  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:15.591082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:15.656904  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:15.949103  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:15.949272  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:16.090640  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:16.155039  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:16.447514  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:16.449003  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:16.589821  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:16.655674  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:16.952654  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:16.953063  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:17.091612  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:17.159499  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:17.449631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:17.449881  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:17.590494  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:17.655629  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:17.951351  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:17.951511  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:18.090316  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:18.155509  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:18.450535  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:18.451342  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:18.591041  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:18.655519  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:18.949171  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:18.949503  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:19.089765  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:19.155836  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:19.449076  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:19.452236  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:19.590791  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:19.655570  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:19.949527  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:19.949612  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:20.090142  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:20.154962  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:20.448016  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:20.450402  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:20.589309  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:20.655296  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:20.949277  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:20.951681  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:21.089881  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:21.154879  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:21.448360  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:21.448858  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:21.589856  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:21.655417  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:21.949400  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:21.949574  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:22.090271  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:22.155368  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:22.449742  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:22.450560  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:22.591054  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:22.656707  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:22.950712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:22.950890  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:23.091160  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:23.155904  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:23.451079  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:23.451281  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:23.590720  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:23.654815  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:23.950160  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:23.950337  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:24.090330  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:24.156001  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:24.447566  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:24.450052  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:24.591509  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:24.656932  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:24.948400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:24.949405  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:25.090541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:25.155347  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:25.449568  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:25.450447  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:25.591119  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:25.654957  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:25.950271  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:25.951174  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:26.091002  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:26.155568  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:26.449372  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:26.449561  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:26.590898  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:26.656087  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:26.951452  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:26.953541  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:27.091542  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:27.155995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:27.451595  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:27.452488  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:27.591591  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:27.657762  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:27.949590  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:27.952182  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:28.090479  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:28.155291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:28.450004  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:28.450851  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:28.590103  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:28.655339  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:28.953363  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:28.954717  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:29.093694  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:29.155028  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:29.449055  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:29.450347  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:29.590581  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:29.656654  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:29.950515  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:29.950799  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:30.090326  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:30.155485  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:30.448572  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:30.449692  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:30.590878  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:30.655807  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:30.956951  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:30.957577  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:31.092534  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:31.155903  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:31.449802  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:31.450326  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:31.593269  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:31.656218  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:31.949211  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:31.949934  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:32.091982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:32.155603  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:32.449522  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:32.451425  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:32.590687  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:32.655082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:32.950545  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:32.950713  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:33.091712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:33.156900  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:33.450998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:33.451121  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:33.592756  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:33.655387  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:33.956059  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:33.956346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:34.090541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:34.155676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:34.449252  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:34.449255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:34.589931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:34.655778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:34.950791  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:34.951042  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:35.089716  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:35.155182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:35.447641  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:35.449881  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:35.590101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:35.655365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:35.949158  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:35.951312  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:36.090687  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:36.156509  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:36.448272  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:36.448489  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:36.591352  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:36.657569  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:36.950696  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:36.952142  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:37.090121  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:37.155891  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:37.448859  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:37.449811  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:37.589598  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:37.655164  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:37.950606  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:37.950726  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:38.089931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:38.155402  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:38.449956  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:38.450889  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:38.590982  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:38.655741  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:38.950070  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:38.950118  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:39.090737  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:39.156071  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:39.448413  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:39.448760  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:39.590316  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:39.655228  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:39.948192  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:39.948232  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:40.089574  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:40.156012  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:40.448864  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:40.451601  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:40.592083  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:40.656209  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:40.948842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:40.949127  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:41.091091  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:41.155236  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:41.449778  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:41.450851  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:41.589659  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:41.656116  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:41.949174  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:41.949802  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:42.090816  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:42.155802  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:42.450496  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:42.452958  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:42.591015  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:42.655595  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:42.949982  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:42.951301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:43.091554  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:43.155772  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:43.451215  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:43.451399  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:43.590489  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:43.655665  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:43.949328  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:43.950974  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:44.092276  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:44.155455  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:44.449429  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:44.449512  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:44.591046  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:44.655586  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:44.949500  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:44.951599  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:45.094722  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:45.154774  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:45.449770  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:45.451691  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:45.590761  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:45.655352  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:45.949743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:45.949864  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:46.090631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:46.156103  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:46.449181  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:46.449779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:46.591976  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:46.655596  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:46.949173  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:46.950623  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:47.093977  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:47.156056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:47.450281  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:47.450897  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:47.591849  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:47.655891  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:47.950318  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:47.951578  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:48.091959  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:48.154872  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:48.450075  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:48.451948  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:48.589733  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:48.655026  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:48.947902  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:48.948922  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:49.090363  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:49.155236  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:49.449018  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:49.449294  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:49.589648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:49.654518  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:49.949085  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:49.949327  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:50.089715  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:50.155336  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:50.450276  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:50.450610  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:50.590265  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:50.655617  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:50.949893  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:50.951287  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:51.090631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:51.155403  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:51.449820  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:51.451010  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:51.591075  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:51.654839  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:51.949284  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:51.950009  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:52.090582  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:52.157494  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:52.448608  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:52.450368  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:52.590998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:52.655180  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:52.948718  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:52.950284  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:53.090712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:53.158605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:53.451168  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:53.451536  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:53.589760  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:53.657022  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:53.948734  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:53.951371  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:54.090202  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:54.155484  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:54.448582  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:54.450090  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:54.589620  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:54.656268  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:54.950155  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:54.950342  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:55.092526  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:55.155567  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:55.448897  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:55.450647  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:55.590184  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:55.656034  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:55.948843  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:55.949804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:56.092633  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:56.155535  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:56.449050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:56.450032  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:56.589978  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:56.655578  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:56.951227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:56.951391  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:57.089968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:57.156011  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:57.449111  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:57.449543  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:57.591323  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:57.656295  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:57.949838  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:57.950157  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:58.090263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:58.155586  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:58.450591  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:58.450796  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:58.590735  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:58.655042  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:58.948769  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:58.949101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:59.089480  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:59.156356  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:59.450318  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:59.452097  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:58:59.589757  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:58:59.656038  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:58:59.951264  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:58:59.955025  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:00.093307  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:00.169810  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:00.453668  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:00.453747  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:00.591664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:00.662082  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:00.958327  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:00.958678  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:01.093618  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:01.191821  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:01.455185  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:01.458398  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:01.593233  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:01.657309  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:01.950520  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:01.956319  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:02.092841  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:02.158368  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:02.454368  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:02.454386  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:02.592341  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:02.658118  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:02.969970  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:02.970262  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:03.091543  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:03.193034  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:03.478206  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:03.494398  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:03.601217  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:03.659210  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:03.956276  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:03.961174  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:04.090843  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:04.154383  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:04.451688  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:04.451709  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:04.590930  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:04.656263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:04.949363  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:04.950133  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:05.102487  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:05.156487  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:05.456245  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:05.457922  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:05.596196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:05.660935  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:05.949095  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:05.954162  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:06.098801  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:06.161484  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:06.448923  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:06.452592  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:06.590210  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:06.659607  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:06.954480  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:06.955630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:07.094202  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:07.161252  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:07.451546  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:07.451627  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:07.599662  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:07.656720  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:07.951554  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:07.951751  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:08.096946  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:08.157724  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:08.453200  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:08.453207  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:08.592711  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:08.695126  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:08.958140  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:08.958561  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:09.090111  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:09.155633  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:09.450116  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:09.450157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:09.595338  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:09.656262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:09.950903  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:09.951773  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:10.089779  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:10.155949  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:10.448520  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:10.449409  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:10.599275  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:10.659673  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:10.948979  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:10.950560  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:11.090875  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:11.155105  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:11.449575  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:11.450246  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:11.600631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:11.658293  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:11.950730  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:11.950966  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:12.090374  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:12.157299  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:12.449320  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:12.449345  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:12.593214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:12.664092  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:12.950600  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:12.951052  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:13.090059  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:13.154911  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:13.450841  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:13.450957  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:13.592263  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:13.655883  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:13.948080  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:13.948214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:14.089646  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:14.157040  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:14.448769  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:14.449141  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:14.590729  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:14.654626  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:14.949399  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:14.951103  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:15.092446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:15.156294  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:15.452499  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:15.452500  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:15.590621  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:15.657627  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:15.951795  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:15.952077  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:16.089680  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:16.156176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:16.448324  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:16.448431  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:16.590666  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:16.656743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:16.948906  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:16.949692  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:17.091257  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:17.155187  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:17.450607  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:17.450848  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:17.589365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:17.655407  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:17.948856  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:17.949516  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:18.091507  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:18.155888  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:18.451560  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:18.452505  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:18.590970  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:18.655165  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:18.947845  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:18.949425  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:19.090758  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:19.154844  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:19.450275  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:19.451846  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:19.589989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:19.655133  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:19.950045  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:19.950331  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:20.090153  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:20.155708  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:20.448562  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:20.448562  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:20.591627  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:20.655924  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:20.947974  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:20.948912  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:21.089422  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:21.155655  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:21.449438  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:21.449734  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:21.589919  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:21.657291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:21.949251  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:21.952143  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:22.091386  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:22.157354  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:22.448913  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:22.449226  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:22.590529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:22.657745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:22.948540  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:22.948933  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:23.089214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:23.157137  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:23.450670  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:23.450902  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:23.590522  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:23.656379  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:23.950154  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:23.950625  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:24.091355  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:24.157165  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:24.448825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:24.453054  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:24.590234  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:24.657541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:24.949335  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:24.951024  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:25.092211  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:25.154931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:25.448939  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:25.448993  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:25.589597  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:25.656199  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:25.948738  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:25.949046  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:26.091849  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:26.154651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:26.448387  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:26.448440  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:26.590516  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:26.656196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:26.949998  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:26.950307  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:27.090220  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:27.156297  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:27.450874  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:27.451092  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:27.589664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:27.655496  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:27.949431  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:27.951612  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:28.090350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:28.155161  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:28.448979  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:28.449151  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:28.589861  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:28.655413  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:28.949789  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:28.951855  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:29.090331  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:29.157070  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:29.449482  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:29.450006  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:29.590813  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:29.655573  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:29.949907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:29.950025  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:30.091458  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:30.158405  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:30.447779  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:30.448834  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:30.591091  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:30.655875  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:30.950684  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:30.953289  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:31.091332  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:31.156823  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:31.448823  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:31.450781  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:31.591809  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:31.656075  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:31.948759  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:31.948968  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:32.091729  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:32.154747  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:32.449239  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:32.449837  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:32.590571  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:32.655646  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:32.949282  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:32.949595  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:33.090694  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:33.155167  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:33.451071  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:33.451405  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:33.591171  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:33.656119  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:33.949262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:33.949454  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:34.090283  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:34.155781  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:34.450392  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:34.451683  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:34.591571  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:34.655909  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:34.949219  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:34.949408  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:35.089980  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:35.154740  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:35.450095  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:35.450349  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:35.591227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:35.692481  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:35.949141  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:35.951867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:36.090822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:36.156098  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:36.448722  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:36.449538  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:36.589624  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:36.657137  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:36.948984  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:36.949366  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:37.091350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:37.157094  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:37.448182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:37.450253  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:37.591119  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:37.656425  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:37.948975  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:37.949867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:38.089759  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:38.155828  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:38.451552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:38.451647  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:38.589973  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:38.655877  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:38.951367  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:38.951367  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:39.091390  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:39.405012  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:39.452050  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:39.452196  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:39.595044  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:39.665344  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:39.953209  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:39.953555  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:40.092147  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:40.155320  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:40.451110  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:40.451951  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:40.591316  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:40.655931  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:40.950017  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:40.951388  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:41.090401  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:41.155143  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:41.448442  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:41.449115  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:41.591565  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:41.656306  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:41.949112  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:41.949534  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:42.091549  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:42.155830  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:42.449887  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:42.450125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:42.591409  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:42.658038  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:42.948502  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:42.951166  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:43.090200  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:43.156509  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:43.450320  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:43.450913  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:43.592334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:43.656125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:43.948166  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:43.949168  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:44.089675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:44.155311  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:44.447960  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:44.449667  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:44.592196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:44.655822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:44.952752  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:44.952747  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:45.090049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:45.155289  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:45.448550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:45.449206  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:45.593908  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:45.656032  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:45.949589  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:45.949968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:46.089906  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:46.156255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:46.448239  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:46.448309  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:46.590954  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:46.656897  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:46.950439  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:46.952416  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:47.090308  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:47.156374  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:47.449653  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:47.450249  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:47.589853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:47.655793  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:47.948702  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:47.948870  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:48.089879  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:48.155753  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:48.448357  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:48.450383  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:48.590577  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:48.656031  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:48.948556  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:48.950049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:49.089412  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:49.156350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:49.449163  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:49.449205  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:49.590039  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:49.655560  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:49.949665  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:49.950181  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:50.090049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:50.155293  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:50.448667  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:50.449257  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:50.590165  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:50.655541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:50.950136  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:50.951139  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:51.092122  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:51.155044  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:51.448983  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:51.449212  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:51.595578  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:51.696454  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:51.949343  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:51.949398  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:52.090651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:52.156291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:52.449203  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:52.449249  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:52.590754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:52.654991  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:52.948372  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:52.948385  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:53.091609  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:53.156662  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:53.450157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:53.451318  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:53.590507  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:53.658421  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:53.949207  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:53.949267  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:54.090069  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:54.155373  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:54.448514  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:54.449541  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:54.591594  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:54.656653  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:54.949522  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:54.950322  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:55.092501  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:55.156404  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:55.449073  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:55.449090  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:55.590073  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:55.662793  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:55.950067  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:55.950494  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:56.089914  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:56.155999  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:56.449360  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:56.449507  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:56.590986  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:56.655362  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:56.949305  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:56.950892  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:57.090093  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:57.156315  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:57.449124  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:57.449348  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:57.589882  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:57.655552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:57.949449  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:57.949662  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:58.090373  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:58.155727  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:58.449522  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:58.450610  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:58.592143  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:58.656332  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:58.948528  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:58.949696  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:59.090864  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:59.155322  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:59.450350  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:59.450722  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:59:59.590799  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:59:59.655636  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:59:59.950576  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:59:59.950769  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:00.090194  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:00.156754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:00.449577  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:00.450369  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:00.591719  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:00.655897  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:00.950338  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:00.950455  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:01.090278  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:01.156266  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:01.452423  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:01.453174  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:01.591221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:01.657914  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:01.948554  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:01.948798  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:02.090601  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:02.157198  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:02.447995  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:02.448026  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:02.590202  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:02.657562  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:02.949780  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:02.952121  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:03.090613  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:03.155733  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:03.449937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:03.449933  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:03.590926  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:03.658062  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:03.949170  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:03.949810  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:04.091414  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:04.155665  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:04.448744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:04.448999  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:04.589836  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:04.656449  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:04.948744  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:04.948893  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:05.091208  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:05.156906  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:05.449064  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:05.449106  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:05.590901  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:05.656845  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:05.950206  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:05.950384  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:06.090523  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:06.155990  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:06.449777  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:06.450837  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:06.590182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:06.656853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:06.948285  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:06.948607  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:07.089995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:07.155347  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:07.449239  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:07.449281  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:07.590385  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:07.656186  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:07.948484  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:07.949766  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:08.090129  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:08.156334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:08.449094  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:08.449099  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:08.590056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:08.655353  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:08.948870  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:08.949837  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:09.089440  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:09.155221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:09.448572  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:09.449128  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:09.590937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:09.655774  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:09.950643  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:09.950782  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:10.091963  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:10.157142  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:10.447872  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:10.448754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:10.590167  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:10.655410  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:10.948681  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:10.950881  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:11.090375  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:11.157845  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:11.448987  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:11.451786  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:11.589278  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:11.656898  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:11.948648  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:11.951845  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:12.089679  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:12.156033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:12.448020  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:12.448554  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:12.591529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:12.657808  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:12.949844  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:12.950340  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:13.090550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:13.156837  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:13.449856  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:13.449881  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:13.590325  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:13.656633  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:13.951242  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:13.951287  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:14.089997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:14.155198  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:14.448400  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:14.448585  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:14.590551  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:14.656896  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:14.949893  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:14.951119  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:15.090441  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:15.155404  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:15.451852  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:15.452266  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:15.591327  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:15.656049  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:15.952977  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:15.953024  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:16.093981  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:16.156724  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:16.448908  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:16.451378  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:16.592066  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:16.657270  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:16.948987  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:16.949080  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:17.090484  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:17.158533  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:17.449593  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:17.449614  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:17.591576  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:17.656835  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:17.952242  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:17.952334  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:18.091234  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:18.156793  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:18.450858  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:18.451103  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:18.590911  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:18.655661  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:18.950782  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:18.950840  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:19.091124  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:19.154772  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:19.449026  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:19.451771  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:19.590291  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:19.657065  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:19.951301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:19.951653  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:20.089930  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:20.156561  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:20.448782  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:20.453763  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:20.591804  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:20.655268  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:20.948366  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:20.948454  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:21.090508  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:21.158196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:21.449394  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:21.449441  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:21.590734  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:21.655940  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:21.950169  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:21.950328  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:22.089889  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:22.157558  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:22.449534  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:22.449815  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:22.590378  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:22.655963  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:22.947894  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:22.948182  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:23.090569  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:23.156273  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:23.450639  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:23.450816  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:23.589218  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:23.655281  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:23.949543  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:23.949989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:24.090664  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:24.155725  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:24.449198  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:24.451299  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:24.590352  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:24.656205  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:24.947767  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:24.948451  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:25.090431  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:25.156379  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:25.449358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:25.449672  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:25.589853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:25.654878  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:25.949904  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:25.950152  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:26.089797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:26.155724  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:26.449336  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:26.450596  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:26.592346  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:26.657848  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:26.949333  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:26.950229  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:27.090752  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:27.157107  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:27.449820  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:27.450010  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:27.590514  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:27.657927  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:27.951547  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:27.952176  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:28.090550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:28.156767  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:28.450227  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:28.451522  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:28.591521  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:28.656790  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:28.949538  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:28.949826  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:29.090055  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:29.155834  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:29.450097  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:29.450167  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:29.590630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:29.655299  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:29.949633  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:29.950101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:30.089708  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:30.154762  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:30.449094  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:30.450366  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:30.590666  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:30.655870  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:30.948836  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:30.948972  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:31.089244  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:31.155334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:31.448853  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:31.449043  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:31.590675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:31.655919  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:31.950253  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:31.951767  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:32.089750  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:32.155423  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:32.449657  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:32.449946  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:32.590358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:32.656797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:32.950101  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:32.950269  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:33.090803  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:33.154674  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:33.454615  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:33.454885  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:33.589479  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:33.656942  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:33.953188  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:33.954139  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:34.091629  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:34.156823  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:34.448754  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:34.449071  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:34.589301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:34.656551  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:34.948611  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:34.950196  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:35.091634  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:35.160584  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:35.448684  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:35.449322  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:35.589630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:35.655232  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:35.947899  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:35.948842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:36.090521  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:36.155599  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:36.449031  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:36.449382  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:36.591743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:36.655255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:36.948722  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:36.949779  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:37.090918  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:37.157590  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:37.448713  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:37.449843  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:37.589677  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:37.656720  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:37.949867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:37.950644  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:38.093262  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:38.156220  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:38.448943  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:38.450543  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:38.591971  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:38.655424  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:38.949892  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:38.951285  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:39.090837  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:39.155790  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:39.449689  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:39.450060  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:39.590012  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:39.655544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:39.949824  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:39.954336  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:40.095357  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:40.155946  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:40.451271  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:40.452848  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:40.590990  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:40.655214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:40.963350  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:40.967975  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:41.092691  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:41.157255  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:41.461052  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:41.464606  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:41.592100  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:41.658218  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:41.951346  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:41.953539  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:42.091948  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:42.170296  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:42.449833  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:42.449879  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:42.589631  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:42.655925  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:42.952512  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:42.953941  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:43.090620  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:43.155937  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:43.449805  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:43.451726  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:43.590975  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:43.655839  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:43.949267  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:43.950221  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:44.091825  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:44.158335  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:44.448909  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:44.450502  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:44.590179  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:44.656226  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:44.948916  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:44.950140  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:45.089907  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:45.156705  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:45.449149  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:45.449285  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:45.590294  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:45.655955  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:45.948817  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:45.951525  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:46.091170  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:46.155968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:46.448814  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:46.450026  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:46.590257  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:00:46.655476  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:00:46.950202  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:00:46.950358  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:00:47.091544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... identical kapi.go:96 wait messages for pods "kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=gcp-auth", and "kubernetes.io/minikube-addons=csi-hostpath-driver" repeated roughly every 500ms from 00:00:47 through 00:01:19; all remained in state Pending: [<nil>] ...]
	I1212 00:01:19.590356  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:19.656750  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:19.949694  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:19.952780  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:20.093710  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:20.162888  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:20.455842  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:20.457429  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:20.597047  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:20.658966  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:20.952832  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:20.956314  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:21.093605  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:21.160516  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:21.449838  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:21.454368  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:21.590229  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:21.657324  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:21.951876  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:21.955993  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:22.093456  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:22.156844  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:22.452923  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:22.453818  191080 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:01:22.591894  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:22.664786  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:22.950056  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:22.950755  191080 kapi.go:107] duration metric: took 4m31.006766325s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 00:01:23.091356  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:23.164794  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:23.496726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:23.601172  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:23.663423  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:23.954300  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:24.094097  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:24.156533  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:24.450111  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:24.590446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:24.655954  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:24.951486  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:25.101144  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:25.157114  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:25.459936  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:25.589209  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:25.655404  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:25.949290  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:26.091205  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:26.192561  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:26.449239  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:26.594301  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:01:26.695112  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:26.950968  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:27.090419  191080 kapi.go:107] duration metric: took 4m31.504642831s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 00:01:27.092322  191080 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-081397 cluster.
	I1212 00:01:27.093973  191080 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 00:01:27.095595  191080 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 00:01:27.155630  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:27.448192  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:27.656676  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:27.949602  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:28.156035  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:28.452122  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:28.656798  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:28.951030  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:29.155812  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:29.450030  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:29.655506  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:29.950947  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:30.156571  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:30.449689  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:30.657986  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:30.952997  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:31.155349  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:31.449194  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:31.657440  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:31.950318  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:32.157071  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:32.449726  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:32.657033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:32.950261  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:33.156773  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:33.450869  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:33.655552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:33.950125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:34.156033  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:34.449419  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:34.663651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:34.951541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:35.156031  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:35.450253  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:35.655842  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:35.948990  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:36.156446  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:36.449076  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:36.656334  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:36.949221  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:37.155204  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:37.448992  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:37.656232  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:37.948670  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:38.155550  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:38.449652  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:38.655986  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:38.950165  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:39.156285  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:39.448380  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:39.656058  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:39.950214  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:40.157325  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:40.449511  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:40.656623  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:40.952375  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:41.157648  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:41.449624  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:41.657125  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:41.951249  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:42.157745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:42.451135  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:42.657530  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:42.949771  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:43.155904  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:43.450113  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:43.655365  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:43.950157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:44.156180  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:44.450046  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:44.655809  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:44.950604  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:45.155614  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:45.448273  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:45.656354  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:45.950705  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:46.156364  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:46.448416  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:46.658552  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:46.949651  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:47.158180  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:47.452700  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:47.656868  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:47.949912  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:48.156755  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:48.451939  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:48.656432  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:48.950201  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:49.156157  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:49.448332  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:49.656228  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:49.950259  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:50.157269  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:50.448882  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:50.656199  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:50.950248  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:51.156922  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:51.449858  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:51.658522  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:51.950331  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:52.158342  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:52.452607  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:52.657583  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:52.952541  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:53.156712  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:53.452538  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:53.656385  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:53.949617  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:54.154792  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:54.450797  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:54.655995  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:54.950745  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:55.155328  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:55.448751  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:55.655216  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:55.949363  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:56.157592  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:56.451921  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:56.664544  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:56.958059  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:57.156884  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:57.449911  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:57.659329  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:57.950478  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:58.157728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:58.449728  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:58.656867  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:58.950675  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:59.158989  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:59.450999  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:01:59.661594  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:01:59.948955  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:00.160422  191080 kapi.go:107] duration metric: took 5m6.50988483s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 00:02:00.450671  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:00.952269  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:01.449529  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:01.950781  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:02.450250  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:02.953623  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:03.451822  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:03.951054  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:04.452684  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:04.952913  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:05.449851  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:05.951096  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:06.448632  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:06.949689  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:07.450190  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:07.949743  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:08.449834  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:08.949956  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:09.449343  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:09.950154  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... identical "waiting for pod" poll lines, repeated every ~500ms from 00:02:10 through 00:02:50, elided ...]
	I1212 00:02:51.448882  191080 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:02:51.944788  191080 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1212 00:02:51.944836  191080 kapi.go:107] duration metric: took 6m0.000623545s to wait for kubernetes.io/minikube-addons=registry ...
	W1212 00:02:51.944978  191080 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1212 00:02:51.946936  191080 out.go:179] * Enabled addons: amd-gpu-device-plugin, default-storageclass, storage-provisioner, inspektor-gadget, cloud-spanner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, ingress, gcp-auth, csi-hostpath-driver
	I1212 00:02:51.948508  191080 addons.go:530] duration metric: took 6m11.452163579s for enable addons: enabled=[amd-gpu-device-plugin default-storageclass storage-provisioner inspektor-gadget cloud-spanner nvidia-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots ingress gcp-auth csi-hostpath-driver]
	I1212 00:02:51.948603  191080 start.go:247] waiting for cluster config update ...
	I1212 00:02:51.948631  191080 start.go:256] writing updated cluster config ...
	I1212 00:02:51.949105  191080 ssh_runner.go:195] Run: rm -f paused
	I1212 00:02:51.959702  191080 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:02:51.966230  191080 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-prc7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.976818  191080 pod_ready.go:94] pod "coredns-66bc5c9577-prc7f" is "Ready"
	I1212 00:02:51.976851  191080 pod_ready.go:86] duration metric: took 10.502006ms for pod "coredns-66bc5c9577-prc7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.982130  191080 pod_ready.go:83] waiting for pod "etcd-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.989125  191080 pod_ready.go:94] pod "etcd-addons-081397" is "Ready"
	I1212 00:02:51.989162  191080 pod_ready.go:86] duration metric: took 7.000579ms for pod "etcd-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:51.992364  191080 pod_ready.go:83] waiting for pod "kube-apiserver-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.000110  191080 pod_ready.go:94] pod "kube-apiserver-addons-081397" is "Ready"
	I1212 00:02:52.000155  191080 pod_ready.go:86] duration metric: took 7.740136ms for pod "kube-apiserver-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.004027  191080 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.365676  191080 pod_ready.go:94] pod "kube-controller-manager-addons-081397" is "Ready"
	I1212 00:02:52.365718  191080 pod_ready.go:86] duration metric: took 361.647196ms for pod "kube-controller-manager-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.569885  191080 pod_ready.go:83] waiting for pod "kube-proxy-jwqpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:52.966570  191080 pod_ready.go:94] pod "kube-proxy-jwqpk" is "Ready"
	I1212 00:02:52.966607  191080 pod_ready.go:86] duration metric: took 396.689665ms for pod "kube-proxy-jwqpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:53.167508  191080 pod_ready.go:83] waiting for pod "kube-scheduler-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:53.566695  191080 pod_ready.go:94] pod "kube-scheduler-addons-081397" is "Ready"
	I1212 00:02:53.566729  191080 pod_ready.go:86] duration metric: took 399.188237ms for pod "kube-scheduler-addons-081397" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:02:53.566746  191080 pod_ready.go:40] duration metric: took 1.607005753s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:02:53.630859  191080 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 00:02:53.633243  191080 out.go:179] * Done! kubectl is now configured to use "addons-081397" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.068383066Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765497996068336337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9cc83e9-0802-4e73-8e31-a0d604ff8770 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.069528902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d3ed88e-c589-465b-a34b-acc7b172e0d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.069616421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d3ed88e-c589-465b-a34b-acc7b172e0d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.070156102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bfeb2a4c48ed303a0beee307e47a2d26cae96dc06839643f457df160f9c6f2,PodSandboxId:9d20100ccee4577ed153c986ff1f48d50be0e22ee3b2493a51a434c948d45d76,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765497681833096560,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kd757,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 874db328-9c1a-48c4-8119-7a1a97a3cf11,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:572fd150b0b3cdbf79adf495bc22111f724b93a7306aeaff042c2eeb1a8513d5,PodSandboxId:f8b58397de2aee866dad6f33d62bd0ac3346250924a219cd6732e4fe612a1231,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540746764445,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7qwmp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7ed76085-0a79-467c-917a-ecde6507a700,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:296863295f8c130fbde70d1da22c1071a62a7f186972626eba7018c154f0b376,PodSandboxId:4b777612bda36d9811947fc19f7979f2fc437f3721a714088004cb89ad366dfc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540634037507,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kxmfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d53af91f-436f-4180-a282-279e52fe615d,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b,PodSandboxId:c1b5ac0ad6da0f8015f757c6d3b289cce8fa504574c2ca4088a745249f081b7f,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,}
,Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765497470859928585,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-fpbst,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b792e8c5-5d38-4540-b39b-8c2a3f475c97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:548a2242825e0a145e6c7b6a1308130225640e19cf3ad9818c0ea69de7b85735,PodSandboxId:d6d7a06f077838b822fd4eae5cd7e90ea733ace45ab036f433bde1113adc9a4
5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765497436440682806,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7df0e3-b14f-46c9-8338-f54a7557bdd0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8605d
7a5d58687d2c0480f86784cb6029e777f6294dab4a712d624f6b42a09e,PodSandboxId:de233462b342d9e6ae89f2996678458daa65ef4c22ddad8fa4244c37173ac655,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83572eb9c0645f2bb3a855b8e24c2c8d0bd9ee3fd48c18a84369038089362134,State:CONTAINER_RUNNING,CreatedAt:1765497427174344836,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5bdddb765-rlznc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73c4629e-b87d-4d90-bcf2-4c4b3ca62b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 22fee82c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18dc2aaa71a240c99745dc2c923535640e4fbe2f9124e4a5c1e1d6ffb67b92d5,PodSandboxId:490a3947ca6f1e530ebcdbdc387f0ed5a13fe2e381934b43251d8ff7fae647bd,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb251c438ab2d3e27301a2b8cfe6a0fa0e2e6f8635fdb9f2bc4c98033e229f79,State:CONTAINER_RUNNING,CreatedAt:1765497420112043812,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-rbpjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22649f4f-f712-4939-86ae-d4e2f8
7acc0a,},Annotations:map[string]string{io.kubernetes.container.hash: a749151f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Ann
otations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.k
ubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7b
d5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04
badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf677b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-a
piserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:
d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daaef29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-co
llector/interceptors.go:74" id=1d3ed88e-c589-465b-a34b-acc7b172e0d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.125610413Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18ee6ec2-afec-407f-bab0-5e3b2f313428 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.126236027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18ee6ec2-afec-407f-bab0-5e3b2f313428 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.128416088Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9cc14a3b-23e2-4ace-a64a-84ce3baf7b50 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.129587503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765497996129552357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9cc14a3b-23e2-4ace-a64a-84ce3baf7b50 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.131396432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fa5ee47-fa97-4634-b526-907b3037e6c8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.131486599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fa5ee47-fa97-4634-b526-907b3037e6c8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.132885204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bfeb2a4c48ed303a0beee307e47a2d26cae96dc06839643f457df160f9c6f2,PodSandboxId:9d20100ccee4577ed153c986ff1f48d50be0e22ee3b2493a51a434c948d45d76,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765497681833096560,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kd757,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 874db328-9c1a-48c4-8119-7a1a97a3cf11,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:572fd150b0b3cdbf79adf495bc22111f724b93a7306aeaff042c2eeb1a8513d5,PodSandboxId:f8b58397de2aee866dad6f33d62bd0ac3346250924a219cd6732e4fe612a1231,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540746764445,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7qwmp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7ed76085-0a79-467c-917a-ecde6507a700,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:296863295f8c130fbde70d1da22c1071a62a7f186972626eba7018c154f0b376,PodSandboxId:4b777612bda36d9811947fc19f7979f2fc437f3721a714088004cb89ad366dfc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540634037507,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kxmfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d53af91f-436f-4180-a282-279e52fe615d,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b,PodSandboxId:c1b5ac0ad6da0f8015f757c6d3b289cce8fa504574c2ca4088a745249f081b7f,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,}
,Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765497470859928585,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-fpbst,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b792e8c5-5d38-4540-b39b-8c2a3f475c97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:548a2242825e0a145e6c7b6a1308130225640e19cf3ad9818c0ea69de7b85735,PodSandboxId:d6d7a06f077838b822fd4eae5cd7e90ea733ace45ab036f433bde1113adc9a4
5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765497436440682806,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7df0e3-b14f-46c9-8338-f54a7557bdd0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8605d
7a5d58687d2c0480f86784cb6029e777f6294dab4a712d624f6b42a09e,PodSandboxId:de233462b342d9e6ae89f2996678458daa65ef4c22ddad8fa4244c37173ac655,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83572eb9c0645f2bb3a855b8e24c2c8d0bd9ee3fd48c18a84369038089362134,State:CONTAINER_RUNNING,CreatedAt:1765497427174344836,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5bdddb765-rlznc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73c4629e-b87d-4d90-bcf2-4c4b3ca62b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 22fee82c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18dc2aaa71a240c99745dc2c923535640e4fbe2f9124e4a5c1e1d6ffb67b92d5,PodSandboxId:490a3947ca6f1e530ebcdbdc387f0ed5a13fe2e381934b43251d8ff7fae647bd,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb251c438ab2d3e27301a2b8cfe6a0fa0e2e6f8635fdb9f2bc4c98033e229f79,State:CONTAINER_RUNNING,CreatedAt:1765497420112043812,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-rbpjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22649f4f-f712-4939-86ae-d4e2f8
7acc0a,},Annotations:map[string]string{io.kubernetes.container.hash: a749151f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Ann
otations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.k
ubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7b
d5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04
badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf677b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-a
piserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:
d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daaef29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-co
llector/interceptors.go:74" id=8fa5ee47-fa97-4634-b526-907b3037e6c8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.172394872Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4dddf0c7-f952-43b7-a024-9a4ab870172f name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.172911874Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4dddf0c7-f952-43b7-a024-9a4ab870172f name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.175142302Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a17441e8-8737-4338-9df9-c78808734979 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.176510489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765497996176474069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a17441e8-8737-4338-9df9-c78808734979 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.178088938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d6533f6-8bdb-49f5-9512-3a721a10dddb name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.178285250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d6533f6-8bdb-49f5-9512-3a721a10dddb name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.178738554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bfeb2a4c48ed303a0beee307e47a2d26cae96dc06839643f457df160f9c6f2,PodSandboxId:9d20100ccee4577ed153c986ff1f48d50be0e22ee3b2493a51a434c948d45d76,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765497681833096560,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kd757,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 874db328-9c1a-48c4-8119-7a1a97a3cf11,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:572fd150b0b3cdbf79adf495bc22111f724b93a7306aeaff042c2eeb1a8513d5,PodSandboxId:f8b58397de2aee866dad6f33d62bd0ac3346250924a219cd6732e4fe612a1231,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540746764445,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7qwmp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7ed76085-0a79-467c-917a-ecde6507a700,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:296863295f8c130fbde70d1da22c1071a62a7f186972626eba7018c154f0b376,PodSandboxId:4b777612bda36d9811947fc19f7979f2fc437f3721a714088004cb89ad366dfc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540634037507,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kxmfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d53af91f-436f-4180-a282-279e52fe615d,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b,PodSandboxId:c1b5ac0ad6da0f8015f757c6d3b289cce8fa504574c2ca4088a745249f081b7f,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,}
,Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765497470859928585,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-fpbst,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b792e8c5-5d38-4540-b39b-8c2a3f475c97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:548a2242825e0a145e6c7b6a1308130225640e19cf3ad9818c0ea69de7b85735,PodSandboxId:d6d7a06f077838b822fd4eae5cd7e90ea733ace45ab036f433bde1113adc9a4
5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765497436440682806,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7df0e3-b14f-46c9-8338-f54a7557bdd0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8605d
7a5d58687d2c0480f86784cb6029e777f6294dab4a712d624f6b42a09e,PodSandboxId:de233462b342d9e6ae89f2996678458daa65ef4c22ddad8fa4244c37173ac655,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83572eb9c0645f2bb3a855b8e24c2c8d0bd9ee3fd48c18a84369038089362134,State:CONTAINER_RUNNING,CreatedAt:1765497427174344836,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5bdddb765-rlznc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73c4629e-b87d-4d90-bcf2-4c4b3ca62b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 22fee82c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18dc2aaa71a240c99745dc2c923535640e4fbe2f9124e4a5c1e1d6ffb67b92d5,PodSandboxId:490a3947ca6f1e530ebcdbdc387f0ed5a13fe2e381934b43251d8ff7fae647bd,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb251c438ab2d3e27301a2b8cfe6a0fa0e2e6f8635fdb9f2bc4c98033e229f79,State:CONTAINER_RUNNING,CreatedAt:1765497420112043812,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-rbpjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22649f4f-f712-4939-86ae-d4e2f8
7acc0a,},Annotations:map[string]string{io.kubernetes.container.hash: a749151f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Ann
otations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.k
ubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7b
d5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04
badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf677b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-a
piserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:
d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daaef29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-co
llector/interceptors.go:74" id=5d6533f6-8bdb-49f5-9512-3a721a10dddb name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.222803780Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=23b23b5c-c091-4283-be82-6e1f263a2322 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.222919227Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=23b23b5c-c091-4283-be82-6e1f263a2322 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.224827222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1fef81d-85fd-4912-b9b1-8bf576546d92 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.226251704Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765497996226209153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:472182,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1fef81d-85fd-4912-b9b1-8bf576546d92 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.228121528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba03ca01-2bae-49c4-8610-398a29f8eef8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.228197317Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba03ca01-2bae-49c4-8610-398a29f8eef8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:36 addons-081397 crio[814]: time="2025-12-12 00:06:36.228639926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:825fa31ff05b6151109108bb44765720c3037acc099d0cc99ece5a494d7fe22b,PodSandboxId:8c904991200ecdd5c0f509d36d728a2e19fe7d2b3f1c8010c95e116ade98ad20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765497871450694038,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5452cb51-90f9-4bce-965c-64e57e2a83e9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d63d362311bd7f9749418aa0e97f8292a16c43f6b185cd5040b13d13cd2937,PodSandboxId:32cdf5109ec8dcac15e47a3a6c96b0d4822ea6242b901bb477b00014e952cbc6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765497801973204107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fe0ee52-bebd-4a25-a44f-86b036a8dccc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bfeb2a4c48ed303a0beee307e47a2d26cae96dc06839643f457df160f9c6f2,PodSandboxId:9d20100ccee4577ed153c986ff1f48d50be0e22ee3b2493a51a434c948d45d76,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765497681833096560,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kd757,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 874db328-9c1a-48c4-8119-7a1a97a3cf11,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:572fd150b0b3cdbf79adf495bc22111f724b93a7306aeaff042c2eeb1a8513d5,PodSandboxId:f8b58397de2aee866dad6f33d62bd0ac3346250924a219cd6732e4fe612a1231,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540746764445,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7qwmp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7ed76085-0a79-467c-917a-ecde6507a700,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:296863295f8c130fbde70d1da22c1071a62a7f186972626eba7018c154f0b376,PodSandboxId:4b777612bda36d9811947fc19f7979f2fc437f3721a714088004cb89ad366dfc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765497540634037507,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kxmfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d53af91f-436f-4180-a282-279e52fe615d,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86266748a701447a7bc3d4fc713e4c1556ef473197223ae231e1ead6cab2cdcd,PodSandboxId:4f522a691840e5a55229089e5ac42a1ae562fe796dc260abac381ce602f58fe1,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765497479013017722,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-fdnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8a40d6-255a-4a70-aee7-d5a6ce60f129,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0808e7e8387c7bef16883ff54ef2f2ae8dfc39be6a1ce32cfd691e4ae203f2b,PodSandboxId:c1b5ac0ad6da0f8015f757c6d3b289cce8fa504574c2ca4088a745249f081b7f,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,}
,Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765497470859928585,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-fpbst,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b792e8c5-5d38-4540-b39b-8c2a3f475c97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:548a2242825e0a145e6c7b6a1308130225640e19cf3ad9818c0ea69de7b85735,PodSandboxId:d6d7a06f077838b822fd4eae5cd7e90ea733ace45ab036f433bde1113adc9a4
5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765497436440682806,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7df0e3-b14f-46c9-8338-f54a7557bdd0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8605d
7a5d58687d2c0480f86784cb6029e777f6294dab4a712d624f6b42a09e,PodSandboxId:de233462b342d9e6ae89f2996678458daa65ef4c22ddad8fa4244c37173ac655,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83572eb9c0645f2bb3a855b8e24c2c8d0bd9ee3fd48c18a84369038089362134,State:CONTAINER_RUNNING,CreatedAt:1765497427174344836,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5bdddb765-rlznc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73c4629e-b87d-4d90-bcf2-4c4b3ca62b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 22fee82c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee283d133145d1a0238fd1948f859650275a3aab5feb16c42eddd447501e36a,PodSandboxId:d6396506b43324b8cb21730a189864b5f6805d8eb53782386a1bd794233e5265,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765497421693560977,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-djxv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5aeb19-64d9-4433-b64e-e6cfb3654839,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18dc2aaa71a240c99745dc2c923535640e4fbe2f9124e4a5c1e1d6ffb67b92d5,PodSandboxId:490a3947ca6f1e530ebcdbdc387f0ed5a13fe2e381934b43251d8ff7fae647bd,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb251c438ab2d3e27301a2b8cfe6a0fa0e2e6f8635fdb9f2bc4c98033e229f79,State:CONTAINER_RUNNING,CreatedAt:1765497420112043812,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-rbpjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22649f4f-f712-4939-86ae-d4e2f8
7acc0a,},Annotations:map[string]string{io.kubernetes.container.hash: a749151f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529,PodSandboxId:d4c844a5473621c53f449e79d23ffdb52f7e170e02c10edb531af6f7ac66b656,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765497415859450065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c582cdc-c50b-4759-b05c-e3b1cd92e04f,},Ann
otations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56,PodSandboxId:241bbeea7c6187605167ea4e4006bfb965b6d204d6b697587b7b6d19aec8dc00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765497402089079634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-prc7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5b3faeb-71ca-42c9-b591-4b563dca360b,},Annotations:map[string]string{io.k
ubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a,PodSandboxId:84c65d7d95ff458d5160d441f506c62cbf06d6f63e19c6282054ea7744a59101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7b
d5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765497400231113482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jwqpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd248790-eb90-4f63-bb25-4253ea30ba17,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379,PodSandboxId:32069928e35e69bd32c3e33e55169d887455d1d207eaeeb20ffd131bbb4975ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04
badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765497387431212592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff5e7fa079d80ee3f44ca1064291a116,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1,PodSandboxId:f75b7d32aa4738a8b6cdd03ba41cf48202681b33597cb90f12bd1fb4cea8cc9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765497387470438948,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9256e13e6a55b263fe4f8ec4b9de5a26,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0,PodSandboxId:78928c0146bf677b0914c273e833a2ad064db2944dce77b48dc919368ad32d79,Metadata:&ContainerMetadata{Name:kube-a
piserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765497387426744831,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85aa936c1106b9dbdb79989b017a1f8c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529,PodSandboxId:
d442318c9ea69899aae26ba77ab0141699292d4bfb353d541e6daaef29ffd624,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765497387375284315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-081397,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7544fc54cb59243312ccd602e077f24,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-co
llector/interceptors.go:74" id=ba03ca01-2bae-49c4-8610-398a29f8eef8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD                                         NAMESPACE
	825fa31ff05b6       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                           2 minutes ago       Running             nginx                      0                   8c904991200ec       nginx                                       default
	25d63d362311b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                    0                   32cdf5109ec8d       busybox                                     default
	e7bfeb2a4c48e       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             5 minutes ago       Running             controller                 0                   9d20100ccee45       ingress-nginx-controller-85d4c799dd-kd757   ingress-nginx
	572fd150b0b3c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   7 minutes ago       Exited              patch                      0                   f8b58397de2ae       ingress-nginx-admission-patch-7qwmp         ingress-nginx
	296863295f8c1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   7 minutes ago       Exited              create                     0                   4b777612bda36       ingress-nginx-admission-create-kxmfj        ingress-nginx
	86266748a7014       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac              8 minutes ago       Running             registry-proxy             0                   4f522a691840e       registry-proxy-fdnc8                        kube-system
	c0808e7e8387c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             8 minutes ago       Running             local-path-provisioner     0                   c1b5ac0ad6da0       local-path-provisioner-648f6765c9-fpbst     local-path-storage
	548a2242825e0       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               9 minutes ago       Running             minikube-ingress-dns       0                   d6d7a06f07783       kube-ingress-dns-minikube                   kube-system
	b8605d7a5d586       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf               9 minutes ago       Running             cloud-spanner-emulator     0                   de233462b342d       cloud-spanner-emulator-5bdddb765-rlznc      default
	0ee283d133145       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     9 minutes ago       Running             amd-gpu-device-plugin      0                   d6396506b4332       amd-gpu-device-plugin-djxv6                 kube-system
	18dc2aaa71a24       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                     9 minutes ago       Running             nvidia-device-plugin-ctr   0                   490a3947ca6f1       nvidia-device-plugin-daemonset-rbpjs        kube-system
	636669d18a2e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             9 minutes ago       Running             storage-provisioner        0                   d4c844a547362       storage-provisioner                         kube-system
	079f9768ce55c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             9 minutes ago       Running             coredns                    0                   241bbeea7c618       coredns-66bc5c9577-prc7f                    kube-system
	7f5ed4f373cfd       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             9 minutes ago       Running             kube-proxy                 0                   84c65d7d95ff4       kube-proxy-jwqpk                            kube-system
	7ace0e7fbfc94       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             10 minutes ago      Running             kube-controller-manager    0                   f75b7d32aa473       kube-controller-manager-addons-081397       kube-system
	d8612fac71b8e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             10 minutes ago      Running             kube-scheduler             0                   32069928e35e6       kube-scheduler-addons-081397                kube-system
	712e27a28f3ca       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             10 minutes ago      Running             kube-apiserver             0                   78928c0146bf6       kube-apiserver-addons-081397                kube-system
	f00e427bcb7fb       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             10 minutes ago      Running             etcd                       0                   d442318c9ea69       etcd-addons-081397                          kube-system
	
	
	==> coredns [079f9768ce55cad9e5a3b141d7d63c93cf2d8c3093603f43ec0f1812168ead56] <==
	[INFO] 10.244.0.10:47879 - 51319 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000635182s
	[INFO] 10.244.0.10:58882 - 8150 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.0005266s
	[INFO] 10.244.0.10:58882 - 25550 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000574024s
	[INFO] 10.244.0.10:58882 - 28727 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000517727s
	[INFO] 10.244.0.10:58882 - 15963 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000163158s
	[INFO] 10.244.0.10:58882 - 10466 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000094733s
	[INFO] 10.244.0.10:58882 - 51349 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000166941s
	[INFO] 10.244.0.10:58882 - 36516 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000109317s
	[INFO] 10.244.0.10:58882 - 53225 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000296497s
	[INFO] 10.244.0.10:54686 - 57544 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000434129s
	[INFO] 10.244.0.10:54686 - 58887 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000474884s
	[INFO] 10.244.0.10:54686 - 26146 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000323233s
	[INFO] 10.244.0.10:54686 - 47350 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000487854s
	[INFO] 10.244.0.10:54686 - 41889 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000316918s
	[INFO] 10.244.0.10:54686 - 2000 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.001139362s
	[INFO] 10.244.0.10:54686 - 34193 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000304144s
	[INFO] 10.244.0.10:54686 - 53585 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000302535s
	[INFO] 10.244.0.10:38562 - 27568 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000343286s
	[INFO] 10.244.0.10:38562 - 44249 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000081648s
	[INFO] 10.244.0.10:38562 - 45264 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000171486s
	[INFO] 10.244.0.10:38562 - 32112 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000405853s
	[INFO] 10.244.0.10:38562 - 31873 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000121282s
	[INFO] 10.244.0.10:38562 - 18716 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000116194s
	[INFO] 10.244.0.10:38562 - 60666 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000117547s
	[INFO] 10.244.0.10:38562 - 37431 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000103834s
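The NXDOMAIN/NOERROR pattern in the coredns log above is ordinary resolver search-list expansion: the pod looks up `registry.kube-system.svc.cluster.local`, and because the name has fewer dots than the resolver's `ndots` threshold (Kubernetes pods typically get `ndots:5` in `resolv.conf`), each search-domain suffix is tried first (all NXDOMAIN) before the bare name finally answers NOERROR. A minimal sketch of that expansion order, assuming the usual kube-system search list:

```python
# Sketch of resolver search-list expansion, assuming a typical kube-system
# pod resolv.conf: search kube-system.svc.cluster.local svc.cluster.local
# cluster.local, with ndots:5. This mirrors the query order in the log; it
# is illustrative, not the actual glibc/musl resolver implementation.
SEARCH = ["kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"]
NDOTS = 5

def candidates(name: str):
    """Yield the FQDNs tried, in order, for a lookup of `name`."""
    if name.count(".") < NDOTS:
        # Fewer dots than ndots: search suffixes are appended first,
        # producing the NXDOMAIN responses seen above.
        for suffix in SEARCH:
            yield f"{name}.{suffix}"
        yield name  # the absolute name is tried last, and answers NOERROR
    else:
        yield name
        for suffix in SEARCH:
            yield f"{name}.{suffix}"

for fqdn in candidates("registry.kube-system.svc.cluster.local"):
    print(fqdn)
```

The first three names printed match the NXDOMAIN queries in the log (`…local.kube-system.svc.cluster.local`, `…local.svc.cluster.local`, `…local.cluster.local`), and the last matches the NOERROR query, so this traffic is normal, not a registry failure by itself.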
	
	
	==> describe nodes <==
	Name:               addons-081397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-081397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=addons-081397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_11T23_56_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-081397
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 11 Dec 2025 23:56:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-081397
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:06:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:04:14 +0000   Thu, 11 Dec 2025 23:56:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:04:14 +0000   Thu, 11 Dec 2025 23:56:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:04:14 +0000   Thu, 11 Dec 2025 23:56:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:04:14 +0000   Thu, 11 Dec 2025 23:56:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    addons-081397
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3908Mi
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3908Mi
	  pods:               110
	System Info:
	  Machine ID:                 132f08c043de4a3fabcb9cf58535d902
	  System UUID:                132f08c0-43de-4a3f-abcb-9cf58535d902
	  Boot ID:                    7a0deef8-e8c7-4912-a254-b2bd4a5f2873
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  default                     cloud-spanner-emulator-5bdddb765-rlznc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-kd757                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         9m45s
	  kube-system                 amd-gpu-device-plugin-djxv6                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-prc7f                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m56s
	  kube-system                 etcd-addons-081397                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-081397                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-081397                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  kube-system                 kube-proxy-jwqpk                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-scheduler-addons-081397                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 nvidia-device-plugin-daemonset-rbpjs                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 registry-6b586f9694-f9q5b                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 registry-creds-764b6fb674-fn77c                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 registry-proxy-fdnc8                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  local-path-storage          helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  local-path-storage          local-path-provisioner-648f6765c9-fpbst                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-zbhhw                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     9m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (9%)  426Mi (10%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9m55s              kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-081397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-081397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-081397 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-081397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-081397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-081397 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m                kubelet          Node addons-081397 status is now: NodeReady
	  Normal  RegisteredNode           9m58s              node-controller  Node addons-081397 event: Registered Node addons-081397 in Controller
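The percentages in the "Allocated resources" table above can be reproduced from the node's allocatable capacity (2 CPUs = 2000m, 3908Mi memory): `kubectl describe` reports each figure as a truncated integer percentage, which is why 850m/2000m shows as 42% rather than 43%. A quick check:

```python
# Reproduce the "Allocated resources" percentages from the describe output.
# Assumes (consistent with the table above) that kubectl truncates rather
# than rounds the percentage.
def pct(request: float, allocatable: float) -> int:
    """Truncated integer percentage, matching `kubectl describe node`."""
    return int(request / allocatable * 100)

print(pct(850, 2000))   # cpu requests: 850m of 2000m
print(pct(388, 3908))   # memory requests: 388Mi of 3908Mi
print(pct(426, 3908))   # memory limits: 426Mi of 3908Mi
```

These reproduce the 42% / 9% / 10% figures in the table, confirming the node is not resource-starved at the time of the failure.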
	
	
	==> dmesg <==
	[  +9.289857] kauditd_printk_skb: 11 callbacks suppressed
	[ +31.114212] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.096500] kauditd_printk_skb: 38 callbacks suppressed
	[Dec11 23:59] kauditd_printk_skb: 101 callbacks suppressed
	[  +4.064218] kauditd_printk_skb: 111 callbacks suppressed
	[  +0.976500] kauditd_printk_skb: 88 callbacks suppressed
	[ +28.653620] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:00] kauditd_printk_skb: 5 callbacks suppressed
	[Dec12 00:01] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.217370] kauditd_printk_skb: 65 callbacks suppressed
	[  +8.838372] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.897990] kauditd_printk_skb: 38 callbacks suppressed
	[ +21.981224] kauditd_printk_skb: 2 callbacks suppressed
	[Dec12 00:02] kauditd_printk_skb: 20 callbacks suppressed
	[Dec12 00:03] kauditd_printk_skb: 26 callbacks suppressed
	[ +10.025958] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.337280] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.693930] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.887520] kauditd_printk_skb: 43 callbacks suppressed
	[  +1.616504] kauditd_printk_skb: 83 callbacks suppressed
	[Dec12 00:04] kauditd_printk_skb: 89 callbacks suppressed
	[  +0.000054] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.912354] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.453375] kauditd_printk_skb: 127 callbacks suppressed
	[  +0.000073] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [f00e427bcb7fb04b5b35041ef6ac7bab5d56a3c501f6bdec4953b64c833c8529] <==
	{"level":"info","ts":"2025-12-11T23:57:54.563834Z","caller":"traceutil/trace.go:172","msg":"trace[62183190] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1035; }","duration":"119.045886ms","start":"2025-12-11T23:57:54.444782Z","end":"2025-12-11T23:57:54.563827Z","steps":["trace[62183190] 'agreement among raft nodes before linearized reading'  (duration: 118.982426ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-11T23:57:54.564190Z","caller":"traceutil/trace.go:172","msg":"trace[2039299796] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"179.02635ms","start":"2025-12-11T23:57:54.385155Z","end":"2025-12-11T23:57:54.564182Z","steps":["trace[2039299796] 'process raft request'  (duration: 178.524709ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:57:54.565247Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.428918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:57:54.565413Z","caller":"traceutil/trace.go:172","msg":"trace[222868242] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1035; }","duration":"119.534642ms","start":"2025-12-11T23:57:54.445807Z","end":"2025-12-11T23:57:54.565342Z","steps":["trace[222868242] 'agreement among raft nodes before linearized reading'  (duration: 119.367809ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-11T23:58:03.552158Z","caller":"traceutil/trace.go:172","msg":"trace[1638119342] linearizableReadLoop","detail":"{readStateIndex:1095; appliedIndex:1096; }","duration":"156.418496ms","start":"2025-12-11T23:58:03.395726Z","end":"2025-12-11T23:58:03.552144Z","steps":["trace[1638119342] 'read index received'  (duration: 156.415444ms)","trace[1638119342] 'applied index is now lower than readState.Index'  (duration: 2.503µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-11T23:58:03.552301Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.56477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:58:03.552320Z","caller":"traceutil/trace.go:172","msg":"trace[928892129] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1059; }","duration":"156.592939ms","start":"2025-12-11T23:58:03.395722Z","end":"2025-12-11T23:58:03.552315Z","steps":["trace[928892129] 'agreement among raft nodes before linearized reading'  (duration: 156.542706ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:58:03.554244Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.397714ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:58:03.555824Z","caller":"traceutil/trace.go:172","msg":"trace[949728136] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1059; }","duration":"111.983139ms","start":"2025-12-11T23:58:03.443830Z","end":"2025-12-11T23:58:03.555813Z","steps":["trace[949728136] 'agreement among raft nodes before linearized reading'  (duration: 110.370385ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-11T23:58:03.554796Z","caller":"traceutil/trace.go:172","msg":"trace[1547687040] transaction","detail":"{read_only:false; response_revision:1060; number_of_response:1; }","duration":"112.058069ms","start":"2025-12-11T23:58:03.442727Z","end":"2025-12-11T23:58:03.554786Z","steps":["trace[1547687040] 'process raft request'  (duration: 111.966352ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:58:03.555039Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.923532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:58:03.556516Z","caller":"traceutil/trace.go:172","msg":"trace[1507526217] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1060; }","duration":"113.405565ms","start":"2025-12-11T23:58:03.443103Z","end":"2025-12-11T23:58:03.556508Z","steps":["trace[1507526217] 'agreement among raft nodes before linearized reading'  (duration: 111.826397ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:59:39.393302Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.001392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-12-11T23:59:39.393692Z","caller":"traceutil/trace.go:172","msg":"trace[1235881685] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1239; }","duration":"171.464156ms","start":"2025-12-11T23:59:39.222198Z","end":"2025-12-11T23:59:39.393662Z","steps":["trace[1235881685] 'range keys from in-memory index tree'  (duration: 170.767828ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-11T23:59:39.393736Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"240.832598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-11T23:59:39.393801Z","caller":"traceutil/trace.go:172","msg":"trace[1862727742] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1239; }","duration":"240.918211ms","start":"2025-12-11T23:59:39.152870Z","end":"2025-12-11T23:59:39.393789Z","steps":["trace[1862727742] 'range keys from in-memory index tree'  (duration: 240.669473ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:01:18.494075Z","caller":"traceutil/trace.go:172","msg":"trace[729500464] transaction","detail":"{read_only:false; response_revision:1398; number_of_response:1; }","duration":"106.783316ms","start":"2025-12-12T00:01:18.387266Z","end":"2025-12-12T00:01:18.494049Z","steps":["trace[729500464] 'process raft request'  (duration: 106.410306ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:02:27.300606Z","caller":"traceutil/trace.go:172","msg":"trace[636598247] transaction","detail":"{read_only:false; response_revision:1559; number_of_response:1; }","duration":"178.765669ms","start":"2025-12-12T00:02:27.121805Z","end":"2025-12-12T00:02:27.300571Z","steps":["trace[636598247] 'process raft request'  (duration: 178.598198ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:03:50.302340Z","caller":"traceutil/trace.go:172","msg":"trace[1017299845] linearizableReadLoop","detail":"{readStateIndex:1944; appliedIndex:1944; }","duration":"211.137553ms","start":"2025-12-12T00:03:50.091151Z","end":"2025-12-12T00:03:50.302289Z","steps":["trace[1017299845] 'read index received'  (duration: 211.129428ms)","trace[1017299845] 'applied index is now lower than readState.Index'  (duration: 7.353µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T00:03:50.302716Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"211.444735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:03:50.302750Z","caller":"traceutil/trace.go:172","msg":"trace[412680698] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1831; }","duration":"211.595679ms","start":"2025-12-12T00:03:50.091146Z","end":"2025-12-12T00:03:50.302742Z","steps":["trace[412680698] 'agreement among raft nodes before linearized reading'  (duration: 211.378448ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:03:50.302919Z","caller":"traceutil/trace.go:172","msg":"trace[333147361] transaction","detail":"{read_only:false; response_revision:1832; number_of_response:1; }","duration":"278.806483ms","start":"2025-12-12T00:03:50.024100Z","end":"2025-12-12T00:03:50.302907Z","steps":["trace[333147361] 'process raft request'  (duration: 278.330678ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:06:28.833586Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1445}
	{"level":"info","ts":"2025-12-12T00:06:28.938406Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1445,"took":"103.595743ms","hash":3397286304,"current-db-size-bytes":6336512,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4149248,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2025-12-12T00:06:28.938497Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3397286304,"revision":1445,"compact-revision":-1}
	
	
	==> kernel <==
	 00:06:36 up 10 min,  0 users,  load average: 1.76, 1.34, 0.94
	Linux addons-081397 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [712e27a28f3cad2b4f2d9ada39dd5acf3548449c6f806d4eee11a16e2420f0a0] <==
	E1211 23:57:51.416788       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.158.20:443: connect: connection refused" logger="UnhandledError"
	E1211 23:57:51.421606       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.158.20:443: connect: connection refused" logger="UnhandledError"
	E1211 23:57:51.423760       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.158.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.158.20:443: connect: connection refused" logger="UnhandledError"
	I1211 23:57:51.587823       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 00:03:28.639031       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8443->192.168.39.1:47840: use of closed network connection
	E1212 00:03:28.907630       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8443->192.168.39.1:47866: use of closed network connection
	I1212 00:03:38.672372       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.240.212"}
	I1212 00:03:52.460336       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1212 00:03:56.689684       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 00:03:56.942418       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.190.115"}
	I1212 00:04:08.084581       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1212 00:04:26.464334       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.464735       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.589812       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.589919       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.683731       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.683804       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.703860       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.704083       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:04:26.747552       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:04:26.747633       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1212 00:04:27.684485       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1212 00:04:27.749684       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1212 00:04:27.811321       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1212 00:06:30.996030       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [7ace0e7fbfc948bd5e100ba019d75d2f9bb47a8b115c5c7dad8a28c41e6b41d1] <==
	I1212 00:04:40.304705       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:04:40.323779       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1212 00:04:40.324334       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1212 00:04:44.950861       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:04:44.953327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:04:46.383547       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:04:46.384898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:04:48.248271       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:04:48.249716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:04:58.780298       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:04:58.782292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:05:00.148817       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:05:00.150772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:05:08.432389       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:05:08.433992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:05:28.580816       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:05:28.581925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:05:42.387829       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:05:42.393861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:05:56.480899       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:05:56.482116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:06:18.044047       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:06:18.045726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 00:06:28.574687       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 00:06:28.576064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [7f5ed4f373cfd08eac038fe7ceb31cf6f339cc828d5946bcfd896e3b2ba9b44a] <==
	I1211 23:56:41.129554       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1211 23:56:41.230792       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1211 23:56:41.230832       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.2"]
	E1211 23:56:41.230926       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:56:41.372420       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1211 23:56:41.372474       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1211 23:56:41.372505       1 server_linux.go:132] "Using iptables Proxier"
	I1211 23:56:41.403791       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:56:41.404681       1 server.go:527] "Version info" version="v1.34.2"
	I1211 23:56:41.404798       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:56:41.409627       1 config.go:200] "Starting service config controller"
	I1211 23:56:41.409659       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1211 23:56:41.409674       1 config.go:106] "Starting endpoint slice config controller"
	I1211 23:56:41.409677       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1211 23:56:41.409687       1 config.go:403] "Starting serviceCIDR config controller"
	I1211 23:56:41.409690       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1211 23:56:41.421538       1 config.go:309] "Starting node config controller"
	I1211 23:56:41.421577       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1211 23:56:41.421584       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1211 23:56:41.510201       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1211 23:56:41.510238       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1211 23:56:41.510294       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d8612fac71b8ea6c3af6f51ed76d7c509987964682f7fec8ee90dfdf32011379] <==
	E1211 23:56:31.058088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1211 23:56:31.058174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1211 23:56:31.058339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1211 23:56:31.058520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1211 23:56:31.058583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1211 23:56:31.878612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1211 23:56:31.916337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1211 23:56:31.929867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1211 23:56:31.934421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1211 23:56:31.956823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1211 23:56:31.994674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1211 23:56:32.004329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1211 23:56:32.010178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1211 23:56:32.026980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1211 23:56:32.052788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1211 23:56:32.154842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1211 23:56:32.220469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1211 23:56:32.267618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1211 23:56:32.308064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1211 23:56:32.344466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1211 23:56:32.371737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1211 23:56:32.397888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1211 23:56:32.548714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1211 23:56:32.628885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1211 23:56:34.946153       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:06:05 addons-081397 kubelet[1522]: E1212 00:06:05.207033    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765497965206399614 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:06:05 addons-081397 kubelet[1522]: E1212 00:06:05.207106    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765497965206399614 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:06:06 addons-081397 kubelet[1522]: I1212 00:06:06.547205    1522 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rbpjs" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:06:11 addons-081397 kubelet[1522]: E1212 00:06:11.548533    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-zbhhw" podUID="0d366411-2739-4499-8990-e9c2c974d30b"
	Dec 12 00:06:12 addons-081397 kubelet[1522]: I1212 00:06:12.551349    1522 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-f9q5b" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:06:12 addons-081397 kubelet[1522]: E1212 00:06:12.552770    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-f9q5b" podUID="96c372a4-ae7e-4df5-9a48-525fc42f8bc5"
	Dec 12 00:06:15 addons-081397 kubelet[1522]: E1212 00:06:15.210420    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765497975209440461 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:06:15 addons-081397 kubelet[1522]: E1212 00:06:15.210447    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765497975209440461 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:06:15 addons-081397 kubelet[1522]: E1212 00:06:15.957746    1522 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605"
	Dec 12 00:06:15 addons-081397 kubelet[1522]: E1212 00:06:15.957820    1522 kuberuntime_image.go:43] "Failed to pull image" err="initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605"
	Dec 12 00:06:15 addons-081397 kubelet[1522]: E1212 00:06:15.958259    1522 kuberuntime_manager.go:1449] "Unhandled Error" err="container registry-creds start failed in pod registry-creds-764b6fb674-fn77c_kube-system(4d72d75e-437b-4632-9fb1-3a7067c23d39): ErrImagePull: initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 12 00:06:15 addons-081397 kubelet[1522]: E1212 00:06:15.958343    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with ErrImagePull: \"initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-creds-764b6fb674-fn77c" podUID="4d72d75e-437b-4632-9fb1-3a7067c23d39"
	Dec 12 00:06:16 addons-081397 kubelet[1522]: I1212 00:06:16.389017    1522 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-fn77c" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:06:16 addons-081397 kubelet[1522]: E1212 00:06:16.391779    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\\\": ErrImagePull: initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-creds-764b6fb674-fn77c" podUID="4d72d75e-437b-4632-9fb1-3a7067c23d39"
	Dec 12 00:06:23 addons-081397 kubelet[1522]: E1212 00:06:23.549115    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-zbhhw" podUID="0d366411-2739-4499-8990-e9c2c974d30b"
	Dec 12 00:06:25 addons-081397 kubelet[1522]: E1212 00:06:25.213820    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765497985213269325 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:06:25 addons-081397 kubelet[1522]: E1212 00:06:25.213915    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765497985213269325 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:06:25 addons-081397 kubelet[1522]: I1212 00:06:25.546618    1522 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-f9q5b" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:06:25 addons-081397 kubelet[1522]: E1212 00:06:25.548607    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-f9q5b" podUID="96c372a4-ae7e-4df5-9a48-525fc42f8bc5"
	Dec 12 00:06:30 addons-081397 kubelet[1522]: I1212 00:06:30.546810    1522 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-fn77c" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:06:35 addons-081397 kubelet[1522]: E1212 00:06:35.217029    1522 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765497995216323610 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:06:35 addons-081397 kubelet[1522]: E1212 00:06:35.217425    1522 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765497995216323610 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:472182} inodes_used:{value:162}}"
	Dec 12 00:06:36 addons-081397 kubelet[1522]: I1212 00:06:36.549736    1522 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-f9q5b" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:06:36 addons-081397 kubelet[1522]: E1212 00:06:36.554899    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-f9q5b" podUID="96c372a4-ae7e-4df5-9a48-525fc42f8bc5"
	Dec 12 00:06:36 addons-081397 kubelet[1522]: E1212 00:06:36.572725    1522 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-zbhhw" podUID="0d366411-2739-4499-8990-e9c2c974d30b"
	
	
	==> storage-provisioner [636669d18a2e5390ba8add1361095ce41ca02d0d75935feae4d0d47ff213f529] <==
	W1212 00:06:11.551821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:13.558366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:13.568225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:15.572602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:15.580742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:17.585483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:17.595857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:19.600492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:19.608345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:21.612449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:21.625204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:23.634508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:23.644571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:25.653153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:25.663525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:27.669253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:27.680033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:29.685668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:29.697252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:31.701828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:31.714916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:33.721914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:33.729287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:35.735254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:06:35.747465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-081397 -n addons-081397
helpers_test.go:270: (dbg) Run:  kubectl --context addons-081397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: test-local-path ingress-nginx-admission-create-kxmfj ingress-nginx-admission-patch-7qwmp registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc yakd-dashboard-5ff678cb9-zbhhw
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Yakd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-081397 describe pod test-local-path ingress-nginx-admission-create-kxmfj ingress-nginx-admission-patch-7qwmp registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc yakd-dashboard-5ff678cb9-zbhhw
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-081397 describe pod test-local-path ingress-nginx-admission-create-kxmfj ingress-nginx-admission-patch-7qwmp registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc yakd-dashboard-5ff678cb9-zbhhw: exit status 1 (113.709341ms)

-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjvf5 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-sjvf5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kxmfj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7qwmp" not found
	Error from server (NotFound): pods "registry-6b586f9694-f9q5b" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-fn77c" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-zbhhw" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-081397 describe pod test-local-path ingress-nginx-admission-create-kxmfj ingress-nginx-admission-patch-7qwmp registry-6b586f9694-f9q5b registry-creds-764b6fb674-fn77c helper-pod-create-pvc-b91f3a2a-d76e-4c97-840a-999ee89274cc yakd-dashboard-5ff678cb9-zbhhw: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-081397 addons disable yakd --alsologtostderr -v=1: (6.051836298s)
--- FAIL: TestAddons/parallel/Yakd (129.65s)

TestFunctional/parallel/DashboardCmd (302.05s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-843156 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-843156 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-843156 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-843156 --alsologtostderr -v=1] stderr:
I1212 00:22:20.393115  202883 out.go:360] Setting OutFile to fd 1 ...
I1212 00:22:20.393422  202883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:20.393434  202883 out.go:374] Setting ErrFile to fd 2...
I1212 00:22:20.393438  202883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:20.393668  202883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
I1212 00:22:20.394069  202883 mustload.go:66] Loading cluster: functional-843156
I1212 00:22:20.394694  202883 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:22:20.397298  202883 host.go:66] Checking if "functional-843156" exists ...
I1212 00:22:20.397571  202883 api_server.go:166] Checking apiserver status ...
I1212 00:22:20.397637  202883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:22:20.402722  202883 main.go:143] libmachine: domain functional-843156 has defined MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:20.403667  202883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:80:58:a9", ip: ""} in network mk-functional-843156: {Iface:virbr1 ExpiryTime:2025-12-12 01:13:50 +0000 UTC Type:0 Mac:52:54:00:80:58:a9 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:functional-843156 Clientid:01:52:54:00:80:58:a9}
I1212 00:22:20.403709  202883 main.go:143] libmachine: domain functional-843156 has defined IP address 192.168.39.201 and MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:20.403927  202883 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-843156/id_rsa Username:docker}
I1212 00:22:20.571739  202883 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/13996/cgroup
W1212 00:22:20.599408  202883 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/13996/cgroup: Process exited with status 1
stdout:

stderr:
I1212 00:22:20.599514  202883 ssh_runner.go:195] Run: ls
I1212 00:22:20.617773  202883 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8441/healthz ...
I1212 00:22:20.629055  202883 api_server.go:279] https://192.168.39.201:8441/healthz returned 200:
ok
W1212 00:22:20.629146  202883 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1212 00:22:20.629342  202883 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:22:20.629367  202883 addons.go:70] Setting dashboard=true in profile "functional-843156"
I1212 00:22:20.629373  202883 addons.go:239] Setting addon dashboard=true in "functional-843156"
I1212 00:22:20.629400  202883 host.go:66] Checking if "functional-843156" exists ...
I1212 00:22:20.633381  202883 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1212 00:22:20.634687  202883 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1212 00:22:20.635966  202883 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1212 00:22:20.635985  202883 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1212 00:22:20.638776  202883 main.go:143] libmachine: domain functional-843156 has defined MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:20.639225  202883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:80:58:a9", ip: ""} in network mk-functional-843156: {Iface:virbr1 ExpiryTime:2025-12-12 01:13:50 +0000 UTC Type:0 Mac:52:54:00:80:58:a9 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:functional-843156 Clientid:01:52:54:00:80:58:a9}
I1212 00:22:20.639251  202883 main.go:143] libmachine: domain functional-843156 has defined IP address 192.168.39.201 and MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:20.639393  202883 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-843156/id_rsa Username:docker}
I1212 00:22:20.881890  202883 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1212 00:22:20.881954  202883 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1212 00:22:21.041574  202883 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1212 00:22:21.041602  202883 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1212 00:22:21.125748  202883 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1212 00:22:21.125785  202883 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1212 00:22:21.234161  202883 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1212 00:22:21.234188  202883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1212 00:22:21.276646  202883 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1212 00:22:21.276673  202883 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1212 00:22:21.325610  202883 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1212 00:22:21.325645  202883 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1212 00:22:21.370779  202883 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1212 00:22:21.370810  202883 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1212 00:22:21.419798  202883 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1212 00:22:21.419827  202883 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1212 00:22:21.455746  202883 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1212 00:22:21.455779  202883 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1212 00:22:21.498157  202883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1212 00:22:22.944229  202883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.446011906s)
I1212 00:22:22.946031  202883 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-843156 addons enable metrics-server

I1212 00:22:22.947143  202883 addons.go:202] Writing out "functional-843156" config to set dashboard=true...
W1212 00:22:22.947516  202883 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1212 00:22:22.948659  202883 kapi.go:59] client config for functional-843156: &rest.Config{Host:"https://192.168.39.201:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.key", CAFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1212 00:22:22.949378  202883 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1212 00:22:22.949404  202883 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1212 00:22:22.949412  202883 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1212 00:22:22.949419  202883 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1212 00:22:22.949426  202883 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1212 00:22:22.979124  202883 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  6acf7324-7185-46a7-99d1-d45fdedd1ffa 578 0 2025-12-12 00:22:22 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-12 00:22:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.101.128.172,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.101.128.172],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1212 00:22:22.979316  202883 out.go:285] * Launching proxy ...
* Launching proxy ...
I1212 00:22:22.979421  202883 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-843156 proxy --port 36195]
I1212 00:22:22.981039  202883 dashboard.go:159] Waiting for kubectl to output host:port ...
I1212 00:22:23.037362  202883 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1212 00:22:23.037405  202883 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1212 00:22:23.056495  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[434064b4-f2d6-4cea-af43-8371409d553d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0016d8080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4780 TLS:<nil>}
I1212 00:22:23.056645  202883 retry.go:31] will retry after 93.32µs: Temporary Error: unexpected response code: 503
I1212 00:22:23.061038  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bc705996-ee2e-4a92-b08a-7012bfc33401] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc00153db80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014e780 TLS:<nil>}
I1212 00:22:23.061151  202883 retry.go:31] will retry after 139.985µs: Temporary Error: unexpected response code: 503
I1212 00:22:23.068386  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c747bba4-1ac1-4055-a101-b3499f1897db] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0016d8180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208f00 TLS:<nil>}
I1212 00:22:23.068583  202883 retry.go:31] will retry after 141.713µs: Temporary Error: unexpected response code: 503
I1212 00:22:23.076931  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d08f5c62-4a92-450a-97bc-0d7e8ffcf746] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0007a58c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014e8c0 TLS:<nil>}
I1212 00:22:23.077017  202883 retry.go:31] will retry after 265.744µs: Temporary Error: unexpected response code: 503
I1212 00:22:23.086871  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[85e3ffa5-1112-4e1a-bd5b-b9d042666d87] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0016d8280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4a00 TLS:<nil>}
I1212 00:22:23.086970  202883 retry.go:31] will retry after 561.282µs: Temporary Error: unexpected response code: 503
I1212 00:22:23.096807  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[67e47ec5-3b31-43f0-abdc-2c9c3fcc5200] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc00153dd00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014ea00 TLS:<nil>}
I1212 00:22:23.096905  202883 retry.go:31] will retry after 478.975µs: Temporary Error: unexpected response code: 503
I1212 00:22:23.103979  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[19e7cee5-89f1-4110-bf8f-6a2ae07e799b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0016d8380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002092c0 TLS:<nil>}
I1212 00:22:23.104055  202883 retry.go:31] will retry after 947.072µs: Temporary Error: unexpected response code: 503
I1212 00:22:23.114051  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3858b861-2c73-4f1f-a303-0fe3830d9836] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0007a5980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014edc0 TLS:<nil>}
I1212 00:22:23.114191  202883 retry.go:31] will retry after 1.153275ms: Temporary Error: unexpected response code: 503
I1212 00:22:23.123241  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c49e09f3-8d49-4a64-bff6-48422c5054d5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0016d8480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4b40 TLS:<nil>}
I1212 00:22:23.123321  202883 retry.go:31] will retry after 2.244863ms: Temporary Error: unexpected response code: 503
I1212 00:22:23.133940  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2bbc07ea-a28e-481b-b7aa-7e19c62effbc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0007a5a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014ef00 TLS:<nil>}
I1212 00:22:23.134037  202883 retry.go:31] will retry after 4.349563ms: Temporary Error: unexpected response code: 503
I1212 00:22:23.144170  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6494c095-0ef6-4476-9d8f-2dae42c4a230] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0016d8540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4c80 TLS:<nil>}
I1212 00:22:23.144265  202883 retry.go:31] will retry after 5.502399ms: Temporary Error: unexpected response code: 503
I1212 00:22:23.159146  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d7aaf18-e410-405c-a7ef-2415afa35558] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0007a5b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f040 TLS:<nil>}
I1212 00:22:23.159228  202883 retry.go:31] will retry after 4.895207ms: Temporary Error: unexpected response code: 503
I1212 00:22:23.169989  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f389c40d-dc8c-4862-b95b-3acccc41842e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc00153de80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4dc0 TLS:<nil>}
I1212 00:22:23.170057  202883 retry.go:31] will retry after 15.452672ms: Temporary Error: unexpected response code: 503
I1212 00:22:23.197213  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2b9b70ca-b698-4ce5-b86f-53e9df4df297] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0016d8640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209400 TLS:<nil>}
I1212 00:22:23.197300  202883 retry.go:31] will retry after 11.612686ms: Temporary Error: unexpected response code: 503
I1212 00:22:23.215420  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[23981eb3-a052-47a7-bdd2-944192c6f390] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc001740000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f180 TLS:<nil>}
I1212 00:22:23.215546  202883 retry.go:31] will retry after 22.73801ms: Temporary Error: unexpected response code: 503
I1212 00:22:23.244892  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0a1e7b7a-069a-4b77-b64a-21635e415262] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0007a5c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209540 TLS:<nil>}
I1212 00:22:23.244980  202883 retry.go:31] will retry after 38.3738ms: Temporary Error: unexpected response code: 503
I1212 00:22:23.291795  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0f2b0a04-d7eb-4eb3-99d8-2f281e192088] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0016d8700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4f00 TLS:<nil>}
I1212 00:22:23.291892  202883 retry.go:31] will retry after 63.572983ms: Temporary Error: unexpected response code: 503
I1212 00:22:23.360416  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2e54eebf-f58c-4858-a70a-1b3426c8febb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0016d8800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f2c0 TLS:<nil>}
I1212 00:22:23.360525  202883 retry.go:31] will retry after 80.964034ms: Temporary Error: unexpected response code: 503
I1212 00:22:23.447719  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[74f3847b-2177-4280-84bf-fd6a5c39113f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0016d88c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f400 TLS:<nil>}
I1212 00:22:23.447786  202883 retry.go:31] will retry after 75.169425ms: Temporary Error: unexpected response code: 503
I1212 00:22:23.528376  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2b039907-2fa3-4a5a-8b2c-a44d2b6a037e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc0007a5d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f680 TLS:<nil>}
I1212 00:22:23.528441  202883 retry.go:31] will retry after 191.019521ms: Temporary Error: unexpected response code: 503
I1212 00:22:23.723440  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b55b12f4-2d8c-4a93-b49b-8adc34f65021] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:23 GMT]] Body:0xc001740140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d5040 TLS:<nil>}
I1212 00:22:23.723546  202883 retry.go:31] will retry after 469.376655ms: Temporary Error: unexpected response code: 503
I1212 00:22:24.198412  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[efe9bf10-79bf-4150-934c-8b924046792a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:24 GMT]] Body:0xc0007a5e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209680 TLS:<nil>}
I1212 00:22:24.198514  202883 retry.go:31] will retry after 558.275256ms: Temporary Error: unexpected response code: 503
I1212 00:22:24.763858  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2d25513f-4478-4411-9c2d-53f788e40289] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:24 GMT]] Body:0xc001740280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d5180 TLS:<nil>}
I1212 00:22:24.763944  202883 retry.go:31] will retry after 477.340094ms: Temporary Error: unexpected response code: 503
I1212 00:22:25.245380  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1a35eb50-9e35-482b-bd42-76b64ab4e8ce] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:25 GMT]] Body:0xc0016d89c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209900 TLS:<nil>}
I1212 00:22:25.245476  202883 retry.go:31] will retry after 857.011559ms: Temporary Error: unexpected response code: 503
I1212 00:22:26.107347  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e34eab57-e23d-4bcf-a409-68c68bb33392] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:26 GMT]] Body:0xc0007a5f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f7c0 TLS:<nil>}
I1212 00:22:26.107415  202883 retry.go:31] will retry after 1.084380635s: Temporary Error: unexpected response code: 503
I1212 00:22:27.196392  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fe5d0ae6-f5c1-43b4-ae41-75464e02f0ef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:27 GMT]] Body:0xc0016d8ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d52c0 TLS:<nil>}
I1212 00:22:27.196501  202883 retry.go:31] will retry after 1.798570783s: Temporary Error: unexpected response code: 503
I1212 00:22:29.000061  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c392577e-10bf-4774-b302-a21fc56e8f69] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:28 GMT]] Body:0xc0017403c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f900 TLS:<nil>}
I1212 00:22:29.000129  202883 retry.go:31] will retry after 4.567598273s: Temporary Error: unexpected response code: 503
I1212 00:22:33.571879  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0f80b4bb-d96f-43fe-aa7f-3efa075c5fb6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:33 GMT]] Body:0xc001740440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014fa40 TLS:<nil>}
I1212 00:22:33.571971  202883 retry.go:31] will retry after 7.664257442s: Temporary Error: unexpected response code: 503
I1212 00:22:41.241139  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[17dd34ab-04b4-41c9-a988-157d1a1e9c01] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:41 GMT]] Body:0xc0017c8080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014fb80 TLS:<nil>}
I1212 00:22:41.241236  202883 retry.go:31] will retry after 8.377782197s: Temporary Error: unexpected response code: 503
I1212 00:22:49.627121  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[32fa1a7b-0cca-4876-ba62-4e4367be1ec1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:22:49 GMT]] Body:0xc001740500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d5540 TLS:<nil>}
I1212 00:22:49.627191  202883 retry.go:31] will retry after 14.984132398s: Temporary Error: unexpected response code: 503
I1212 00:23:04.616853  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[94bfdb43-a2a3-4e4f-a12c-6d2624a1d72d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:23:04 GMT]] Body:0xc0017c8180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014fcc0 TLS:<nil>}
I1212 00:23:04.616955  202883 retry.go:31] will retry after 14.01436125s: Temporary Error: unexpected response code: 503
I1212 00:23:18.638793  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c0f47f5-cd68-4481-8005-2fec72c7400a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:23:18 GMT]] Body:0xc0016d8d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209a40 TLS:<nil>}
I1212 00:23:18.638896  202883 retry.go:31] will retry after 41.138372522s: Temporary Error: unexpected response code: 503
I1212 00:23:59.787199  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7303116a-749f-4492-8afb-86cdb6767cca] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:23:59 GMT]] Body:0xc0016d8e40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014fe00 TLS:<nil>}
I1212 00:23:59.787278  202883 retry.go:31] will retry after 28.029214197s: Temporary Error: unexpected response code: 503
I1212 00:24:27.823330  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[10fa6dac-5815-4232-84d1-23451c0e67aa] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:24:27 GMT]] Body:0xc000834040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014e000 TLS:<nil>}
I1212 00:24:27.823427  202883 retry.go:31] will retry after 1m16.491097506s: Temporary Error: unexpected response code: 503
I1212 00:25:44.320065  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f908936c-2682-4018-b9d7-b6480d7ccfbe] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:25:44 GMT]] Body:0xc0016d80c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002a6b40 TLS:<nil>}
I1212 00:25:44.320168  202883 retry.go:31] will retry after 1m15.475942462s: Temporary Error: unexpected response code: 503
I1212 00:26:59.808312  202883 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c5132095-dfc5-4003-8888-5ce0f034156f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 12 Dec 2025 00:26:59 GMT]] Body:0xc0016d8080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002a6dc0 TLS:<nil>}
I1212 00:26:59.808444  202883 retry.go:31] will retry after 1m24.716481707s: Temporary Error: unexpected response code: 503
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-843156 -n functional-843156
helpers_test.go:253: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-843156 logs -n 25: (1.257619151s)
helpers_test.go:261: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-843156 ssh sudo cat /etc/ssl/certs/1902722.pem                                                                                                    │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ ssh            │ functional-843156 ssh sudo cat /usr/share/ca-certificates/1902722.pem                                                                                        │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image ls                                                                                                                                   │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ ssh            │ functional-843156 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image load --daemon kicbase/echo-server:functional-843156 --alsologtostderr                                                                │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ ssh            │ functional-843156 ssh sudo cat /etc/test/nested/copy/190272/hosts                                                                                            │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image ls                                                                                                                                   │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image load --daemon kicbase/echo-server:functional-843156 --alsologtostderr                                                                │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image ls                                                                                                                                   │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image save kicbase/echo-server:functional-843156 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image rm kicbase/echo-server:functional-843156 --alsologtostderr                                                                           │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image ls                                                                                                                                   │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image ls                                                                                                                                   │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image save --daemon kicbase/echo-server:functional-843156 --alsologtostderr                                                                │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ update-context │ functional-843156 update-context --alsologtostderr -v=2                                                                                                      │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ update-context │ functional-843156 update-context --alsologtostderr -v=2                                                                                                      │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ update-context │ functional-843156 update-context --alsologtostderr -v=2                                                                                                      │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image ls --format short --alsologtostderr                                                                                                  │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image ls --format yaml --alsologtostderr                                                                                                   │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ ssh            │ functional-843156 ssh pgrep buildkitd                                                                                                                        │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │                     │
	│ image          │ functional-843156 image build -t localhost/my-image:functional-843156 testdata/build --alsologtostderr                                                       │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image ls                                                                                                                                   │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image ls --format json --alsologtostderr                                                                                                   │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	│ image          │ functional-843156 image ls --format table --alsologtostderr                                                                                                  │ functional-843156 │ jenkins │ v1.37.0 │ 12 Dec 25 00:22 UTC │ 12 Dec 25 00:22 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:22:20
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:22:20.250177  202861 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:22:20.250475  202861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:22:20.250490  202861 out.go:374] Setting ErrFile to fd 2...
	I1212 00:22:20.250497  202861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:22:20.250801  202861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 00:22:20.251350  202861 out.go:368] Setting JSON to false
	I1212 00:22:20.252547  202861 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":21884,"bootTime":1765477056,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:22:20.252636  202861 start.go:143] virtualization: kvm guest
	I1212 00:22:20.254696  202861 out.go:179] * [functional-843156] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:22:20.256256  202861 notify.go:221] Checking for updates...
	I1212 00:22:20.256288  202861 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:22:20.257891  202861 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:22:20.260141  202861 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 00:22:20.261505  202861 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1212 00:22:20.262768  202861 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:22:20.264297  202861 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:22:20.266155  202861 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:22:20.266771  202861 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:22:20.303021  202861 out.go:179] * Using the kvm2 driver based on existing profile
	I1212 00:22:20.304484  202861 start.go:309] selected driver: kvm2
	I1212 00:22:20.304507  202861 start.go:927] validating driver "kvm2" against &{Name:functional-843156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-843156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:22:20.304635  202861 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:22:20.305566  202861 cni.go:84] Creating CNI manager for ""
	I1212 00:22:20.305647  202861 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:22:20.305717  202861 start.go:353] cluster config:
	{Name:functional-843156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-843156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:22:20.307555  202861 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.148953386Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765499241148922223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242134,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d82d49a-fc04-43fa-9afe-919f75622d2c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.149950944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ecfce2a4-5a08-442b-a71e-51cc587059e8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.150032571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ecfce2a4-5a08-442b-a71e-51cc587059e8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.150389420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a11df105f6acf5596bd21cae8db921efa4c7cfba344af01b08b3cc5793ca0d8b,PodSandboxId:6bbc6cb9cb1b3d7fbd23959cccb394daab6a88b9e0d005358f2b4a75f0af15b1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765499033093442111,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-krnrx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03389d4f-808d-4cd0-8294-bdb7818ea8cc,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc32ea7b305605c421da12463c4b3739e8ddd51d40c59b71d04f78601d63273,PodSandboxId:a56975d0e2da9b2c419b0d5a6af8403c0042d6a8e2dac9da2ff043f5b4db0dbe,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765498950624583205,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f311db17-37c6-43c5-ba25-ae0112ca44d1,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91690a6bd7af5ccdabfd2bf34a30f7f59579cefeb47f6693dc3195d13a15fcc7,PodSandboxId:91950b5d27b2be01dd77d423cc9afc29c52cac31187093386a58216968cf0e9b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765498936008097512,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc79ceef-e1e5-4e84-8e50-08cddb4210cc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016acc37605a4a31a23313851936a8eb7d0c3dec202b957b9d9f9cb94058316e,PodSandboxId:0d9ad533050d1af5f537da0af7a5a77fb452dedbf57f00176c77d62372605c72,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765498931833411206,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-smtsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61053f06-f67e-4c9c-aad9-822bceb0a15b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcc387937c5bd8a432b9d351a442d7c9f5a803cc0c13fea0cb29b824fcd3e4,PodSandboxId:a931fdbf96a96db2e1f0f9a59aaf78db847a0b15ed42c2dfcd12013c2a8eefa8,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765498931656978133,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-v2pcs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 502d0fce-f1bd-4bb8-a19c-3d14d8ff443b,},Annotations:map[string]string
{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb75eace1ae60bbd7be7eb094825319878dc47766e111d74d2197ca4f6047ff3,PodSandboxId:054c0c820f83954e48fa7f7c365b91638566e51e42d9a75507e91f1691cd2262,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765498914596861311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b07950-5b67-4c6a-b247-adfff0295856,},Annotations:map[string]string{io.kubernet
es.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2545d1c4104badfb2c899565277d86acde4fa84487cbc911a5eed7e620e5f1ca,PodSandboxId:ec87cf789aa67bad53b6953d01b63b0a7db9b73c69a78776052d7195d4ccb949,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765498914509303568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-p767c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaced64d-9231-4a1d-b5a5-2c53fad7adc2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa86415d95833529a5a348090e88d506d3fe176f4a55d0ce6754e16b491486ba,PodSandboxId:f95e015f6841c85a8def7e96739daebfcc57fcfd64a17e86e41172faab56c8b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,State:CONTAINER_RUNNING,CreatedAt:1765498914469492453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sb4z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dceb9ad-8ffa-4cff-9e3e-42b76326accf,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cecfbdb350cefa94a1fe7105f86d02286dd03cbc95015d680495ed4626d0faea,PodSandboxId:4077a
acf17dd514391197da135051421f22e0c0c6864ab0cd0d1f5024582848e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765498914306389294,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bklgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508c8a96-ef4f-46ae-9595-e3d79f5aaad1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d6f457c9deec9b0a914fc5749fede48335396d3a12696a9d2196e8ee90272c,PodSandboxId:bbf68618157c84b52c5e847693f602924c5a50
2723c92cac87113ed52dd4c98f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765498901170974533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b741d1439a301fd6bce8381a66d4f311,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:682bcf3f5717ce12e9f08af35593b7951fe49e
81430c77c7696b229c2b256134,PodSandboxId:004aa8039d54675e34e1de4eb62d95775ec9f2df3ccebd71236681c77df6be29,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765498901151918682,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80662db746c78d6588e618be59490c36,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11564ab7e066433a2abd8ab634601a55afdbcc5c42c04013655b2e5137957278,PodSandboxId:3f33764e6a00259a05922cc2d27f5fc67277b44d149af6769e412d2741c2f0fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765498901155189584,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676a46725761f088caa5b8e3aa507704,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c3eaddaddeb434de6de5c05814e52089a9d858f295a9a189ee98cef9a02543,PodSandboxId:7f3d9c555b4e17a6a68d5e8d0ade8fdb0581ea8709021cedbf0f9b8a1f980c1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765498901001446579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7045f88ec16042ef897bddab2ec31bb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{
\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ecfce2a4-5a08-442b-a71e-51cc587059e8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.189989615Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47b543cb-ee9e-46ce-bd7b-38e666bdf576 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.190087485Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47b543cb-ee9e-46ce-bd7b-38e666bdf576 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.191728684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61e2b0bd-00d8-4a51-a410-4cbba6dbe64f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.193763514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765499241193665360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242134,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61e2b0bd-00d8-4a51-a410-4cbba6dbe64f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.195055668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a28b95d1-4f7d-4c77-b6be-9a1120a531cc name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.195189232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a28b95d1-4f7d-4c77-b6be-9a1120a531cc name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.195481837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a11df105f6acf5596bd21cae8db921efa4c7cfba344af01b08b3cc5793ca0d8b,PodSandboxId:6bbc6cb9cb1b3d7fbd23959cccb394daab6a88b9e0d005358f2b4a75f0af15b1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765499033093442111,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-krnrx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03389d4f-808d-4cd0-8294-bdb7818ea8cc,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc32ea7b305605c421da12463c4b3739e8ddd51d40c59b71d04f78601d63273,PodSandboxId:a56975d0e2da9b2c419b0d5a6af8403c0042d6a8e2dac9da2ff043f5b4db0dbe,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765498950624583205,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f311db17-37c6-43c5-ba25-ae0112ca44d1,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91690a6bd7af5ccdabfd2bf34a30f7f59579cefeb47f6693dc3195d13a15fcc7,PodSandboxId:91950b5d27b2be01dd77d423cc9afc29c52cac31187093386a58216968cf0e9b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765498936008097512,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc79ceef-e1e5-4e84-8e50-08cddb4210cc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016acc37605a4a31a23313851936a8eb7d0c3dec202b957b9d9f9cb94058316e,PodSandboxId:0d9ad533050d1af5f537da0af7a5a77fb452dedbf57f00176c77d62372605c72,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765498931833411206,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-smtsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61053f06-f67e-4c9c-aad9-822bceb0a15b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcc387937c5bd8a432b9d351a442d7c9f5a803cc0c13fea0cb29b824fcd3e4,PodSandboxId:a931fdbf96a96db2e1f0f9a59aaf78db847a0b15ed42c2dfcd12013c2a8eefa8,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765498931656978133,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-v2pcs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 502d0fce-f1bd-4bb8-a19c-3d14d8ff443b,},Annotations:map[string]string
{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb75eace1ae60bbd7be7eb094825319878dc47766e111d74d2197ca4f6047ff3,PodSandboxId:054c0c820f83954e48fa7f7c365b91638566e51e42d9a75507e91f1691cd2262,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765498914596861311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b07950-5b67-4c6a-b247-adfff0295856,},Annotations:map[string]string{io.kubernet
es.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2545d1c4104badfb2c899565277d86acde4fa84487cbc911a5eed7e620e5f1ca,PodSandboxId:ec87cf789aa67bad53b6953d01b63b0a7db9b73c69a78776052d7195d4ccb949,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765498914509303568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-p767c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaced64d-9231-4a1d-b5a5-2c53fad7adc2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa86415d95833529a5a348090e88d506d3fe176f4a55d0ce6754e16b491486ba,PodSandboxId:f95e015f6841c85a8def7e96739daebfcc57fcfd64a17e86e41172faab56c8b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,State:CONTAINER_RUNNING,CreatedAt:1765498914469492453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sb4z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dceb9ad-8ffa-4cff-9e3e-42b76326accf,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cecfbdb350cefa94a1fe7105f86d02286dd03cbc95015d680495ed4626d0faea,PodSandboxId:4077a
acf17dd514391197da135051421f22e0c0c6864ab0cd0d1f5024582848e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765498914306389294,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bklgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508c8a96-ef4f-46ae-9595-e3d79f5aaad1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d6f457c9deec9b0a914fc5749fede48335396d3a12696a9d2196e8ee90272c,PodSandboxId:bbf68618157c84b52c5e847693f602924c5a50
2723c92cac87113ed52dd4c98f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765498901170974533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b741d1439a301fd6bce8381a66d4f311,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:682bcf3f5717ce12e9f08af35593b7951fe49e
81430c77c7696b229c2b256134,PodSandboxId:004aa8039d54675e34e1de4eb62d95775ec9f2df3ccebd71236681c77df6be29,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765498901151918682,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80662db746c78d6588e618be59490c36,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11564ab7e066433a2abd8ab634601a55afdbcc5c42c04013655b2e5137957278,PodSandboxId:3f33764e6a00259a05922cc2d27f5fc67277b44d149af6769e412d2741c2f0fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765498901155189584,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676a46725761f088caa5b8e3aa507704,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c3eaddaddeb434de6de5c05814e52089a9d858f295a9a189ee98cef9a02543,PodSandboxId:7f3d9c555b4e17a6a68d5e8d0ade8fdb0581ea8709021cedbf0f9b8a1f980c1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765498901001446579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7045f88ec16042ef897bddab2ec31bb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{
\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a28b95d1-4f7d-4c77-b6be-9a1120a531cc name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.229443707Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=821802b7-1c42-4cb1-80d1-40d8fba80d93 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.229527889Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=821802b7-1c42-4cb1-80d1-40d8fba80d93 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.231058734Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5061828e-1c64-4442-be04-000535573781 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.232719842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765499241232687771,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242134,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5061828e-1c64-4442-be04-000535573781 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.233925030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13cad733-521f-42cd-8a56-198239f6b1d9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.234003539Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13cad733-521f-42cd-8a56-198239f6b1d9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.234376893Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a11df105f6acf5596bd21cae8db921efa4c7cfba344af01b08b3cc5793ca0d8b,PodSandboxId:6bbc6cb9cb1b3d7fbd23959cccb394daab6a88b9e0d005358f2b4a75f0af15b1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765499033093442111,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-krnrx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03389d4f-808d-4cd0-8294-bdb7818ea8cc,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc32ea7b305605c421da12463c4b3739e8ddd51d40c59b71d04f78601d63273,PodSandboxId:a56975d0e2da9b2c419b0d5a6af8403c0042d6a8e2dac9da2ff043f5b4db0dbe,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765498950624583205,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f311db17-37c6-43c5-ba25-ae0112ca44d1,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91690a6bd7af5ccdabfd2bf34a30f7f59579cefeb47f6693dc3195d13a15fcc7,PodSandboxId:91950b5d27b2be01dd77d423cc9afc29c52cac31187093386a58216968cf0e9b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765498936008097512,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc79ceef-e1e5-4e84-8e50-08cddb4210cc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016acc37605a4a31a23313851936a8eb7d0c3dec202b957b9d9f9cb94058316e,PodSandboxId:0d9ad533050d1af5f537da0af7a5a77fb452dedbf57f00176c77d62372605c72,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765498931833411206,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-smtsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61053f06-f67e-4c9c-aad9-822bceb0a15b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcc387937c5bd8a432b9d351a442d7c9f5a803cc0c13fea0cb29b824fcd3e4,PodSandboxId:a931fdbf96a96db2e1f0f9a59aaf78db847a0b15ed42c2dfcd12013c2a8eefa8,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765498931656978133,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-v2pcs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 502d0fce-f1bd-4bb8-a19c-3d14d8ff443b,},Annotations:map[string]string
{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb75eace1ae60bbd7be7eb094825319878dc47766e111d74d2197ca4f6047ff3,PodSandboxId:054c0c820f83954e48fa7f7c365b91638566e51e42d9a75507e91f1691cd2262,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765498914596861311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b07950-5b67-4c6a-b247-adfff0295856,},Annotations:map[string]string{io.kubernet
es.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2545d1c4104badfb2c899565277d86acde4fa84487cbc911a5eed7e620e5f1ca,PodSandboxId:ec87cf789aa67bad53b6953d01b63b0a7db9b73c69a78776052d7195d4ccb949,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765498914509303568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-p767c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaced64d-9231-4a1d-b5a5-2c53fad7adc2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa86415d95833529a5a348090e88d506d3fe176f4a55d0ce6754e16b491486ba,PodSandboxId:f95e015f6841c85a8def7e96739daebfcc57fcfd64a17e86e41172faab56c8b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,State:CONTAINER_RUNNING,CreatedAt:1765498914469492453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sb4z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dceb9ad-8ffa-4cff-9e3e-42b76326accf,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cecfbdb350cefa94a1fe7105f86d02286dd03cbc95015d680495ed4626d0faea,PodSandboxId:4077a
acf17dd514391197da135051421f22e0c0c6864ab0cd0d1f5024582848e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765498914306389294,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bklgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508c8a96-ef4f-46ae-9595-e3d79f5aaad1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d6f457c9deec9b0a914fc5749fede48335396d3a12696a9d2196e8ee90272c,PodSandboxId:bbf68618157c84b52c5e847693f602924c5a50
2723c92cac87113ed52dd4c98f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765498901170974533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b741d1439a301fd6bce8381a66d4f311,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:682bcf3f5717ce12e9f08af35593b7951fe49e
81430c77c7696b229c2b256134,PodSandboxId:004aa8039d54675e34e1de4eb62d95775ec9f2df3ccebd71236681c77df6be29,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765498901151918682,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80662db746c78d6588e618be59490c36,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11564ab7e066433a2abd8ab634601a55afdbcc5c42c04013655b2e5137957278,PodSandboxId:3f33764e6a00259a05922cc2d27f5fc67277b44d149af6769e412d2741c2f0fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765498901155189584,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676a46725761f088caa5b8e3aa507704,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c3eaddaddeb434de6de5c05814e52089a9d858f295a9a189ee98cef9a02543,PodSandboxId:7f3d9c555b4e17a6a68d5e8d0ade8fdb0581ea8709021cedbf0f9b8a1f980c1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765498901001446579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7045f88ec16042ef897bddab2ec31bb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{
\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13cad733-521f-42cd-8a56-198239f6b1d9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.268902400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5dd3457-6a37-46b3-b090-33a767afa27c name=/runtime.v1.RuntimeService/Version
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.269019640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5dd3457-6a37-46b3-b090-33a767afa27c name=/runtime.v1.RuntimeService/Version
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.270473177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13fbf21c-beab-402e-afe7-5bfebd675d44 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.271826198Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765499241271790865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242134,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13fbf21c-beab-402e-afe7-5bfebd675d44 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.272960616Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92f01d79-0baa-4132-8bd2-b41c2c0a887b name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.273018626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92f01d79-0baa-4132-8bd2-b41c2c0a887b name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:27:21 functional-843156 crio[6203]: time="2025-12-12 00:27:21.273373723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a11df105f6acf5596bd21cae8db921efa4c7cfba344af01b08b3cc5793ca0d8b,PodSandboxId:6bbc6cb9cb1b3d7fbd23959cccb394daab6a88b9e0d005358f2b4a75f0af15b1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765499033093442111,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-krnrx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03389d4f-808d-4cd0-8294-bdb7818ea8cc,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc32ea7b305605c421da12463c4b3739e8ddd51d40c59b71d04f78601d63273,PodSandboxId:a56975d0e2da9b2c419b0d5a6af8403c0042d6a8e2dac9da2ff043f5b4db0dbe,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765498950624583205,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f311db17-37c6-43c5-ba25-ae0112ca44d1,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91690a6bd7af5ccdabfd2bf34a30f7f59579cefeb47f6693dc3195d13a15fcc7,PodSandboxId:91950b5d27b2be01dd77d423cc9afc29c52cac31187093386a58216968cf0e9b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765498936008097512,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc79ceef-e1e5-4e84-8e50-08cddb4210cc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016acc37605a4a31a23313851936a8eb7d0c3dec202b957b9d9f9cb94058316e,PodSandboxId:0d9ad533050d1af5f537da0af7a5a77fb452dedbf57f00176c77d62372605c72,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765498931833411206,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-smtsr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61053f06-f67e-4c9c-aad9-822bceb0a15b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1adcc387937c5bd8a432b9d351a442d7c9f5a803cc0c13fea0cb29b824fcd3e4,PodSandboxId:a931fdbf96a96db2e1f0f9a59aaf78db847a0b15ed42c2dfcd12013c2a8eefa8,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765498931656978133,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-v2pcs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 502d0fce-f1bd-4bb8-a19c-3d14d8ff443b,},Annotations:map[string]string
{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb75eace1ae60bbd7be7eb094825319878dc47766e111d74d2197ca4f6047ff3,PodSandboxId:054c0c820f83954e48fa7f7c365b91638566e51e42d9a75507e91f1691cd2262,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765498914596861311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b07950-5b67-4c6a-b247-adfff0295856,},Annotations:map[string]string{io.kubernet
es.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2545d1c4104badfb2c899565277d86acde4fa84487cbc911a5eed7e620e5f1ca,PodSandboxId:ec87cf789aa67bad53b6953d01b63b0a7db9b73c69a78776052d7195d4ccb949,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765498914509303568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-p767c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaced64d-9231-4a1d-b5a5-2c53fad7adc2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa86415d95833529a5a348090e88d506d3fe176f4a55d0ce6754e16b491486ba,PodSandboxId:f95e015f6841c85a8def7e96739daebfcc57fcfd64a17e86e41172faab56c8b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,State:CONTAINER_RUNNING,CreatedAt:1765498914469492453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sb4z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dceb9ad-8ffa-4cff-9e3e-42b76326accf,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cecfbdb350cefa94a1fe7105f86d02286dd03cbc95015d680495ed4626d0faea,PodSandboxId:4077a
acf17dd514391197da135051421f22e0c0c6864ab0cd0d1f5024582848e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765498914306389294,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bklgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508c8a96-ef4f-46ae-9595-e3d79f5aaad1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d6f457c9deec9b0a914fc5749fede48335396d3a12696a9d2196e8ee90272c,PodSandboxId:bbf68618157c84b52c5e847693f602924c5a50
2723c92cac87113ed52dd4c98f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765498901170974533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b741d1439a301fd6bce8381a66d4f311,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:682bcf3f5717ce12e9f08af35593b7951fe49e
81430c77c7696b229c2b256134,PodSandboxId:004aa8039d54675e34e1de4eb62d95775ec9f2df3ccebd71236681c77df6be29,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765498901151918682,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80662db746c78d6588e618be59490c36,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11564ab7e066433a2abd8ab634601a55afdbcc5c42c04013655b2e5137957278,PodSandboxId:3f33764e6a00259a05922cc2d27f5fc67277b44d149af6769e412d2741c2f0fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765498901155189584,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 676a46725761f088caa5b8e3aa507704,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c3eaddaddeb434de6de5c05814e52089a9d858f295a9a189ee98cef9a02543,PodSandboxId:7f3d9c555b4e17a6a68d5e8d0ade8fdb0581ea8709021cedbf0f9b8a1f980c1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765498901001446579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-843156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7045f88ec16042ef897bddab2ec31bb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{
\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92f01d79-0baa-4132-8bd2-b41c2c0a887b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a11df105f6acf       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   3 minutes ago       Running             mysql                     0                   6bbc6cb9cb1b3       mysql-6bcdcbc558-krnrx                      default
	3bc32ea7b3056       a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c                                              4 minutes ago       Running             myfrontend                0                   a56975d0e2da9       sp-pod                                      default
	91690a6bd7af5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           5 minutes ago       Exited              mount-munger              0                   91950b5d27b2b       busybox-mount                               default
	016acc37605a4       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6         5 minutes ago       Running             echo-server               0                   0d9ad533050d1       hello-node-75c85bcc94-smtsr                 default
	1adcc387937c5       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6         5 minutes ago       Running             echo-server               0                   a931fdbf96a96       hello-node-connect-7d85dfc575-v2pcs         default
	eb75eace1ae60       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              5 minutes ago       Running             storage-provisioner       0                   054c0c820f839       storage-provisioner                         kube-system
	2545d1c4104ba       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              5 minutes ago       Running             coredns                   0                   ec87cf789aa67       coredns-66bc5c9577-p767c                    kube-system
	fa86415d95833       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              5 minutes ago       Running             coredns                   0                   f95e015f6841c       coredns-66bc5c9577-sb4z4                    kube-system
	cecfbdb350cef       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                              5 minutes ago       Running             kube-proxy                0                   4077aacf17dd5       kube-proxy-bklgr                            kube-system
	36d6f457c9dee       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              5 minutes ago       Running             etcd                      3                   bbf68618157c8       etcd-functional-843156                      kube-system
	11564ab7e0664       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                              5 minutes ago       Running             kube-apiserver            5                   3f33764e6a002       kube-apiserver-functional-843156            kube-system
	682bcf3f5717c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                              5 minutes ago       Running             kube-controller-manager   7                   004aa8039d546       kube-controller-manager-functional-843156   kube-system
	a7c3eaddaddeb       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                              5 minutes ago       Running             kube-scheduler            3                   7f3d9c555b4e1       kube-scheduler-functional-843156            kube-system
	
	
	==> coredns [2545d1c4104badfb2c899565277d86acde4fa84487cbc911a5eed7e620e5f1ca] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> coredns [fa86415d95833529a5a348090e88d506d3fe176f4a55d0ce6754e16b491486ba] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> describe nodes <==
	Name:               functional-843156
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-843156
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=functional-843156
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_21_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:21:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-843156
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:27:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:24:20 +0000   Fri, 12 Dec 2025 00:21:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:24:20 +0000   Fri, 12 Dec 2025 00:21:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:24:20 +0000   Fri, 12 Dec 2025 00:21:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:24:20 +0000   Fri, 12 Dec 2025 00:21:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    functional-843156
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 f98c1f86839e4eb9a15d8d4d010a6574
	  System UUID:                f98c1f86-839e-4eb9-a15d-8d4d010a6574
	  Boot ID:                    de109716-1b25-43d9-aa34-8ac37140a310
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-smtsr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  default                     hello-node-connect-7d85dfc575-v2pcs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  default                     mysql-6bcdcbc558-krnrx                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    4m57s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 coredns-66bc5c9577-p767c                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m28s
	  kube-system                 coredns-66bc5c9577-sb4z4                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m28s
	  kube-system                 etcd-functional-843156                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m34s
	  kube-system                 kube-apiserver-functional-843156              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-controller-manager-functional-843156     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-proxy-bklgr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-functional-843156              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-96cgz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hr9d7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (26%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m26s  kube-proxy       
	  Normal  Starting                 5m34s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m34s  kubelet          Node functional-843156 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s  kubelet          Node functional-843156 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s  kubelet          Node functional-843156 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m29s  node-controller  Node functional-843156 event: Registered Node functional-843156 in Controller
	
	
	==> dmesg <==
	[ +20.890422] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.598351] kauditd_printk_skb: 5 callbacks suppressed
	[Dec12 00:15] kauditd_printk_skb: 350 callbacks suppressed
	[  +0.000174] kauditd_printk_skb: 95 callbacks suppressed
	[ +12.587468] kauditd_printk_skb: 98 callbacks suppressed
	[ +14.329357] kauditd_printk_skb: 12 callbacks suppressed
	[Dec12 00:17] kauditd_printk_skb: 209 callbacks suppressed
	[  +0.483758] kauditd_printk_skb: 170 callbacks suppressed
	[ +21.868038] kauditd_printk_skb: 60 callbacks suppressed
	[Dec12 00:18] kauditd_printk_skb: 5 callbacks suppressed
	[ +25.986568] kauditd_printk_skb: 21 callbacks suppressed
	[Dec12 00:19] kauditd_printk_skb: 5 callbacks suppressed
	[ +21.289109] kauditd_printk_skb: 5 callbacks suppressed
	[Dec12 00:20] kauditd_printk_skb: 5 callbacks suppressed
	[ +21.600248] kauditd_printk_skb: 5 callbacks suppressed
	[Dec12 00:21] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.607902] kauditd_printk_skb: 110 callbacks suppressed
	[  +0.000070] kauditd_printk_skb: 13 callbacks suppressed
	[Dec12 00:22] kauditd_printk_skb: 170 callbacks suppressed
	[  +0.240883] kauditd_printk_skb: 148 callbacks suppressed
	[  +2.606473] kauditd_printk_skb: 89 callbacks suppressed
	[  +4.102472] kauditd_printk_skb: 61 callbacks suppressed
	[  +3.975765] crun[17396]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.607110] kauditd_printk_skb: 131 callbacks suppressed
	[Dec12 00:23] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [36d6f457c9deec9b0a914fc5749fede48335396d3a12696a9d2196e8ee90272c] <==
	{"level":"info","ts":"2025-12-12T00:23:43.921909Z","caller":"traceutil/trace.go:172","msg":"trace[549432625] transaction","detail":"{read_only:false; response_revision:700; number_of_response:1; }","duration":"170.1007ms","start":"2025-12-12T00:23:43.751787Z","end":"2025-12-12T00:23:43.921888Z","steps":["trace[549432625] 'process raft request'  (duration: 169.97241ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:23:46.201166Z","caller":"traceutil/trace.go:172","msg":"trace[1987872749] linearizableReadLoop","detail":"{readStateIndex:739; appliedIndex:739; }","duration":"219.2752ms","start":"2025-12-12T00:23:45.981834Z","end":"2025-12-12T00:23:46.201109Z","steps":["trace[1987872749] 'read index received'  (duration: 219.267187ms)","trace[1987872749] 'applied index is now lower than readState.Index'  (duration: 7.344µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T00:23:46.201330Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"219.440038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:23:46.201370Z","caller":"traceutil/trace.go:172","msg":"trace[1607769341] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:700; }","duration":"219.533801ms","start":"2025-12-12T00:23:45.981829Z","end":"2025-12-12T00:23:46.201363Z","steps":["trace[1607769341] 'agreement among raft nodes before linearized reading'  (duration: 219.408822ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:23:46.204327Z","caller":"traceutil/trace.go:172","msg":"trace[1019684290] transaction","detail":"{read_only:false; response_revision:701; number_of_response:1; }","duration":"265.623128ms","start":"2025-12-12T00:23:45.936971Z","end":"2025-12-12T00:23:46.202594Z","steps":["trace[1019684290] 'process raft request'  (duration: 265.32909ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:23:47.538100Z","caller":"traceutil/trace.go:172","msg":"trace[705771112] linearizableReadLoop","detail":"{readStateIndex:742; appliedIndex:742; }","duration":"172.121759ms","start":"2025-12-12T00:23:47.365958Z","end":"2025-12-12T00:23:47.538080Z","steps":["trace[705771112] 'read index received'  (duration: 172.116771ms)","trace[705771112] 'applied index is now lower than readState.Index'  (duration: 4.294µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T00:23:47.538667Z","caller":"traceutil/trace.go:172","msg":"trace[1507566451] transaction","detail":"{read_only:false; response_revision:703; number_of_response:1; }","duration":"388.028862ms","start":"2025-12-12T00:23:47.150628Z","end":"2025-12-12T00:23:47.538657Z","steps":["trace[1507566451] 'process raft request'  (duration: 387.879822ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T00:23:47.540211Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-12T00:23:47.150609Z","time spent":"389.120792ms","remote":"127.0.0.1:53730","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":681,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-t6wakzollyp7wvwghqas6kac2i\" mod_revision:688 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-t6wakzollyp7wvwghqas6kac2i\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-t6wakzollyp7wvwghqas6kac2i\" > >"}
	{"level":"warn","ts":"2025-12-12T00:23:47.538882Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.905997ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:23:47.541749Z","caller":"traceutil/trace.go:172","msg":"trace[1701130788] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:703; }","duration":"175.782003ms","start":"2025-12-12T00:23:47.365954Z","end":"2025-12-12T00:23:47.541736Z","steps":["trace[1701130788] 'agreement among raft nodes before linearized reading'  (duration: 172.890031ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:23:49.528599Z","caller":"traceutil/trace.go:172","msg":"trace[2104750220] linearizableReadLoop","detail":"{readStateIndex:744; appliedIndex:744; }","duration":"160.091026ms","start":"2025-12-12T00:23:49.368490Z","end":"2025-12-12T00:23:49.528581Z","steps":["trace[2104750220] 'read index received'  (duration: 160.085337ms)","trace[2104750220] 'applied index is now lower than readState.Index'  (duration: 4.979µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T00:23:49.528927Z","caller":"traceutil/trace.go:172","msg":"trace[157263968] transaction","detail":"{read_only:false; response_revision:705; number_of_response:1; }","duration":"206.547036ms","start":"2025-12-12T00:23:49.322368Z","end":"2025-12-12T00:23:49.528915Z","steps":["trace[157263968] 'process raft request'  (duration: 206.262313ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T00:23:49.528997Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.485792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:23:49.529025Z","caller":"traceutil/trace.go:172","msg":"trace[1047204675] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:705; }","duration":"160.535445ms","start":"2025-12-12T00:23:49.368483Z","end":"2025-12-12T00:23:49.529018Z","steps":["trace[1047204675] 'agreement among raft nodes before linearized reading'  (duration: 160.419647ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T00:23:49.542648Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.629967ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:23:49.542710Z","caller":"traceutil/trace.go:172","msg":"trace[1056716936] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:705; }","duration":"100.703155ms","start":"2025-12-12T00:23:49.441997Z","end":"2025-12-12T00:23:49.542701Z","steps":["trace[1056716936] 'agreement among raft nodes before linearized reading'  (duration: 100.605421ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:23:49.542976Z","caller":"traceutil/trace.go:172","msg":"trace[269030416] transaction","detail":"{read_only:false; response_revision:706; number_of_response:1; }","duration":"215.643978ms","start":"2025-12-12T00:23:49.327322Z","end":"2025-12-12T00:23:49.542966Z","steps":["trace[269030416] 'process raft request'  (duration: 215.459427ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T00:23:52.852872Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"407.554166ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-12T00:23:52.853088Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"380.961348ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:23:52.853207Z","caller":"traceutil/trace.go:172","msg":"trace[1910509439] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:709; }","duration":"381.474988ms","start":"2025-12-12T00:23:52.471713Z","end":"2025-12-12T00:23:52.853188Z","steps":["trace[1910509439] 'range keys from in-memory index tree'  (duration: 380.890652ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T00:23:52.853280Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-12T00:23:52.471694Z","time spent":"381.576643ms","remote":"127.0.0.1:53366","response type":"/etcdserverpb.KV/Range","request count":0,"request size":28,"response count":0,"response size":27,"request content":"key:\"/registry/resourcequotas\" limit:1 "}
	{"level":"info","ts":"2025-12-12T00:23:52.853101Z","caller":"traceutil/trace.go:172","msg":"trace[1417545651] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:709; }","duration":"408.839802ms","start":"2025-12-12T00:23:52.444241Z","end":"2025-12-12T00:23:52.853080Z","steps":["trace[1417545651] 'range keys from in-memory index tree'  (duration: 407.455585ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T00:23:52.853999Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"229.329821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:23:52.855485Z","caller":"traceutil/trace.go:172","msg":"trace[2087108227] range","detail":"{range_begin:/registry/persistentvolumeclaims; range_end:; response_count:0; response_revision:709; }","duration":"231.046065ms","start":"2025-12-12T00:23:52.624373Z","end":"2025-12-12T00:23:52.855420Z","steps":["trace[2087108227] 'range keys from in-memory index tree'  (duration: 229.177571ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:23:54.410785Z","caller":"traceutil/trace.go:172","msg":"trace[948349713] transaction","detail":"{read_only:false; response_revision:718; number_of_response:1; }","duration":"104.373232ms","start":"2025-12-12T00:23:54.306398Z","end":"2025-12-12T00:23:54.410772Z","steps":["trace[948349713] 'process raft request'  (duration: 104.276401ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:27:21 up 13 min,  0 users,  load average: 0.28, 0.55, 0.45
	Linux functional-843156 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [11564ab7e066433a2abd8ab634601a55afdbcc5c42c04013655b2e5137957278] <==
	I1212 00:21:46.545732       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 00:21:46.562522       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.201]
	I1212 00:21:46.564246       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 00:21:46.571326       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:21:47.337504       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:21:47.374280       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:21:47.414576       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 00:21:47.454822       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 00:21:53.088810       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1212 00:21:53.240658       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:21:53.356522       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:21:53.380654       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 00:22:05.800806       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.119.138"}
	I1212 00:22:10.274672       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.119.124"}
	I1212 00:22:10.362098       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.230.20"}
	I1212 00:22:22.869515       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.128.172"}
	I1212 00:22:22.910841       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.191.36"}
	I1212 00:22:24.273766       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.23.140"}
	E1212 00:22:29.073838       1 conn.go:339] Error on socket receive: read tcp 192.168.39.201:8441->192.168.39.1:33024: use of closed network connection
	E1212 00:22:37.154867       1 conn.go:339] Error on socket receive: read tcp 192.168.39.201:8441->192.168.39.1:59944: use of closed network connection
	E1212 00:24:00.637963       1 conn.go:339] Error on socket receive: read tcp 192.168.39.201:8441->192.168.39.1:35688: use of closed network connection
	E1212 00:24:01.684897       1 conn.go:339] Error on socket receive: read tcp 192.168.39.201:8441->192.168.39.1:35712: use of closed network connection
	E1212 00:24:03.081554       1 conn.go:339] Error on socket receive: read tcp 192.168.39.201:8441->192.168.39.1:35726: use of closed network connection
	E1212 00:24:04.506541       1 conn.go:339] Error on socket receive: read tcp 192.168.39.201:8441->192.168.39.1:35738: use of closed network connection
	E1212 00:24:07.089265       1 conn.go:339] Error on socket receive: read tcp 192.168.39.201:8441->192.168.39.1:35752: use of closed network connection
	
	
	==> kube-controller-manager [682bcf3f5717ce12e9f08af35593b7951fe49e81430c77c7696b229c2b256134] <==
	I1212 00:21:52.426451       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 00:21:52.426797       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 00:21:52.426881       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-843156"
	I1212 00:21:52.426928       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 00:21:52.428227       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 00:21:52.429211       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1212 00:21:52.429315       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 00:21:52.430416       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 00:21:52.430692       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1212 00:21:52.430731       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1212 00:21:52.430760       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 00:21:52.431715       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1212 00:21:52.432886       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1212 00:21:52.433061       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 00:21:52.433583       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 00:21:52.435858       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 00:21:52.433640       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 00:21:52.433651       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 00:21:52.441586       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 00:21:52.444715       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 00:21:52.447191       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	E1212 00:22:22.474792       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:22:22.564683       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:22:22.591381       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:22:22.597611       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [cecfbdb350cefa94a1fe7105f86d02286dd03cbc95015d680495ed4626d0faea] <==
	I1212 00:21:54.989215       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 00:21:55.089802       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 00:21:55.089908       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.201"]
	E1212 00:21:55.089997       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:21:55.132044       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1212 00:21:55.132098       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 00:21:55.132179       1 server_linux.go:132] "Using iptables Proxier"
	I1212 00:21:55.143868       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:21:55.144811       1 server.go:527] "Version info" version="v1.34.2"
	I1212 00:21:55.144904       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:21:55.147259       1 config.go:200] "Starting service config controller"
	I1212 00:21:55.147296       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:21:55.147312       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:21:55.147315       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:21:55.147324       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:21:55.147327       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:21:55.150203       1 config.go:309] "Starting node config controller"
	I1212 00:21:55.150299       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:21:55.150334       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:21:55.248460       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:21:55.248630       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:21:55.248642       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
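[Editor's note] The kube-proxy log above warns that `nodePortAddresses` is unset and suggests `--nodeport-addresses primary`. A minimal sketch of the corresponding KubeProxyConfiguration fragment, assuming the cluster is reconfigured via a config file rather than flags; the values shown are illustrative, not what this test run used:

```yaml
# Hedged sketch: restrict NodePort listeners to the node's primary addresses.
# The literal "primary" is accepted as the sole entry in Kubernetes >= 1.27.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "iptables"            # matches "Using iptables Proxier" above
nodePortAddresses:
  - "primary"
```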
	
	==> kube-scheduler [a7c3eaddaddeb434de6de5c05814e52089a9d858f295a9a189ee98cef9a02543] <==
	E1212 00:21:44.433277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 00:21:44.433430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 00:21:44.434454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 00:21:44.434588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 00:21:44.434730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 00:21:44.435371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 00:21:44.435692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 00:21:44.435766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 00:21:45.363309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 00:21:45.389467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 00:21:45.394537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 00:21:45.406424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 00:21:45.478256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 00:21:45.533802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 00:21:45.533802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 00:21:45.539756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 00:21:45.626533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 00:21:45.638703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 00:21:45.677304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 00:21:45.735055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 00:21:45.821717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 00:21:45.835411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1212 00:21:45.847487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 00:21:45.867487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1212 00:21:48.796597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:26:07 functional-843156 kubelet[14133]: E1212 00:26:07.537695   14133 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765499167537315993 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:26:07 functional-843156 kubelet[14133]: E1212 00:26:07.537717   14133 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765499167537315993 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:26:09 functional-843156 kubelet[14133]: E1212 00:26:09.308432   14133 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-96cgz" podUID="0f019d4f-1e5d-495d-be72-ebd1633ea69a"
	Dec 12 00:26:17 functional-843156 kubelet[14133]: E1212 00:26:17.540201   14133 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765499177539544047 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:26:17 functional-843156 kubelet[14133]: E1212 00:26:17.540233   14133 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765499177539544047 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:26:20 functional-843156 kubelet[14133]: E1212 00:26:20.312584   14133 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-96cgz" podUID="0f019d4f-1e5d-495d-be72-ebd1633ea69a"
	Dec 12 00:26:27 functional-843156 kubelet[14133]: E1212 00:26:27.543465   14133 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765499187542932009 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:26:27 functional-843156 kubelet[14133]: E1212 00:26:27.544262   14133 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765499187542932009 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:26:31 functional-843156 kubelet[14133]: E1212 00:26:31.308492   14133 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-96cgz" podUID="0f019d4f-1e5d-495d-be72-ebd1633ea69a"
	Dec 12 00:26:37 functional-843156 kubelet[14133]: E1212 00:26:37.546855   14133 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765499197546026069 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:26:37 functional-843156 kubelet[14133]: E1212 00:26:37.546902   14133 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765499197546026069 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:26:47 functional-843156 kubelet[14133]: E1212 00:26:47.550939   14133 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765499207549237674 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:26:47 functional-843156 kubelet[14133]: E1212 00:26:47.550985   14133 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765499207549237674 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:26:53 functional-843156 kubelet[14133]: E1212 00:26:53.719090   14133 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 12 00:26:53 functional-843156 kubelet[14133]: E1212 00:26:53.719213   14133 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 12 00:26:53 functional-843156 kubelet[14133]: E1212 00:26:53.719420   14133 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-hr9d7_kubernetes-dashboard(843c1a1c-422b-4be7-8604-1935006d8cb1): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 12 00:26:53 functional-843156 kubelet[14133]: E1212 00:26:53.719460   14133 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hr9d7" podUID="843c1a1c-422b-4be7-8604-1935006d8cb1"
	Dec 12 00:26:57 functional-843156 kubelet[14133]: E1212 00:26:57.554852   14133 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765499217553362111 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:26:57 functional-843156 kubelet[14133]: E1212 00:26:57.554951   14133 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765499217553362111 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:27:05 functional-843156 kubelet[14133]: E1212 00:27:05.309570   14133 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hr9d7" podUID="843c1a1c-422b-4be7-8604-1935006d8cb1"
	Dec 12 00:27:07 functional-843156 kubelet[14133]: E1212 00:27:07.556648   14133 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765499227555987390 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:27:07 functional-843156 kubelet[14133]: E1212 00:27:07.557723   14133 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765499227555987390 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:27:17 functional-843156 kubelet[14133]: E1212 00:27:17.560555   14133 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765499237559941937 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:27:17 functional-843156 kubelet[14133]: E1212 00:27:17.560599   14133 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765499237559941937 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:242134} inodes_used:{value:106}}"
	Dec 12 00:27:19 functional-843156 kubelet[14133]: E1212 00:27:19.310503   14133 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hr9d7" podUID="843c1a1c-422b-4be7-8604-1935006d8cb1"
	
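[Editor's note] The kubelet failures above are Docker Hub's unauthenticated pull rate limit (`toomanyrequests`), not a cluster fault. One conventional mitigation is authenticated pulls via an `imagePullSecret`; a hedged sketch, assuming credentials exist — the secret name `dockerhub-creds` and the base64 placeholder are hypothetical, and the ServiceAccount shown is the dashboard addon's:

```yaml
# Hypothetical pull secret; .dockerconfigjson must be a base64-encoded
# Docker config.json containing registry credentials.
apiVersion: v1
kind: Secret
metadata:
  name: dockerhub-creds
  namespace: kubernetes-dashboard
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker config.json>
---
# Attach the secret to the ServiceAccount so pods in the namespace pull with it.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
imagePullSecrets:
  - name: dockerhub-creds
```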
	
	==> storage-provisioner [eb75eace1ae60bbd7be7eb094825319878dc47766e111d74d2197ca4f6047ff3] <==
	W1212 00:26:57.684808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:26:59.688858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:26:59.698237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:01.701459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:01.706977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:03.711209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:03.721206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:05.725963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:05.732905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:07.737381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:07.746899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:09.750857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:09.756875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:11.761338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:11.766710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:13.769819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:13.780246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:15.783930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:15.789852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:17.792869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:17.802055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:19.805885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:19.811900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:21.816283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:27:21.825720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-843156 -n functional-843156
helpers_test.go:270: (dbg) Run:  kubectl --context functional-843156 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount dashboard-metrics-scraper-77bf4d6c4c-96cgz kubernetes-dashboard-855c9754f9-hr9d7
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-843156 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-96cgz kubernetes-dashboard-855c9754f9-hr9d7
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-843156 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-96cgz kubernetes-dashboard-855c9754f9-hr9d7: exit status 1 (74.891367ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-843156/192.168.39.201
	Start Time:       Fri, 12 Dec 2025 00:22:13 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  mount-munger:
	    Container ID:  cri-o://91690a6bd7af5ccdabfd2bf34a30f7f59579cefeb47f6693dc3195d13a15fcc7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 12 Dec 2025 00:22:16 +0000
	      Finished:     Fri, 12 Dec 2025 00:22:16 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n5c7l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-n5c7l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5m9s  default-scheduler  Successfully assigned default/busybox-mount to functional-843156
	  Normal  Pulling    5m9s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m7s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.264s (2.264s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m6s  kubelet            Created container: mount-munger
	  Normal  Started    5m6s  kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-96cgz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-hr9d7" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-843156 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-96cgz kubernetes-dashboard-855c9754f9-hr9d7: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (3.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-582645 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-582645 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-582645 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-582645 --alsologtostderr -v=1] stderr:
I1212 00:31:41.297919  207204 out.go:360] Setting OutFile to fd 1 ...
I1212 00:31:41.298064  207204 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:31:41.298073  207204 out.go:374] Setting ErrFile to fd 2...
I1212 00:31:41.298077  207204 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:31:41.298276  207204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
I1212 00:31:41.298562  207204 mustload.go:66] Loading cluster: functional-582645
I1212 00:31:41.298946  207204 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:31:41.300979  207204 host.go:66] Checking if "functional-582645" exists ...
I1212 00:31:41.301186  207204 api_server.go:166] Checking apiserver status ...
I1212 00:31:41.301233  207204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:31:41.303677  207204 main.go:143] libmachine: domain functional-582645 has defined MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:41.304104  207204 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f5:68:d8", ip: ""} in network mk-functional-582645: {Iface:virbr1 ExpiryTime:2025-12-12 01:27:40 +0000 UTC Type:0 Mac:52:54:00:f5:68:d8 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-582645 Clientid:01:52:54:00:f5:68:d8}
I1212 00:31:41.304131  207204 main.go:143] libmachine: domain functional-582645 has defined IP address 192.168.39.189 and MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:41.304267  207204 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-582645/id_rsa Username:docker}
I1212 00:31:41.401151  207204 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7192/cgroup
W1212 00:31:41.416036  207204 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7192/cgroup: Process exited with status 1
stdout:

stderr:
I1212 00:31:41.416156  207204 ssh_runner.go:195] Run: ls
I1212 00:31:41.423171  207204 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
I1212 00:31:41.429065  207204 api_server.go:279] https://192.168.39.189:8441/healthz returned 200:
ok
W1212 00:31:41.429127  207204 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1212 00:31:41.429302  207204 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:31:41.429318  207204 addons.go:70] Setting dashboard=true in profile "functional-582645"
I1212 00:31:41.429325  207204 addons.go:239] Setting addon dashboard=true in "functional-582645"
I1212 00:31:41.429365  207204 host.go:66] Checking if "functional-582645" exists ...
I1212 00:31:41.433097  207204 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1212 00:31:41.434493  207204 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1212 00:31:41.435671  207204 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1212 00:31:41.435712  207204 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1212 00:31:41.439294  207204 main.go:143] libmachine: domain functional-582645 has defined MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:41.439822  207204 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f5:68:d8", ip: ""} in network mk-functional-582645: {Iface:virbr1 ExpiryTime:2025-12-12 01:27:40 +0000 UTC Type:0 Mac:52:54:00:f5:68:d8 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-582645 Clientid:01:52:54:00:f5:68:d8}
I1212 00:31:41.439881  207204 main.go:143] libmachine: domain functional-582645 has defined IP address 192.168.39.189 and MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:41.440075  207204 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-582645/id_rsa Username:docker}
I1212 00:31:41.540250  207204 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1212 00:31:41.540284  207204 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1212 00:31:41.570971  207204 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1212 00:31:41.571002  207204 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1212 00:31:41.594146  207204 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1212 00:31:41.594177  207204 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1212 00:31:41.621920  207204 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1212 00:31:41.621950  207204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1212 00:31:41.648640  207204 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1212 00:31:41.648680  207204 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1212 00:31:41.676418  207204 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1212 00:31:41.676449  207204 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1212 00:31:41.703543  207204 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1212 00:31:41.703572  207204 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1212 00:31:41.728618  207204 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1212 00:31:41.728654  207204 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1212 00:31:41.754047  207204 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1212 00:31:41.754080  207204 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1212 00:31:41.780629  207204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1212 00:31:42.577545  207204 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-582645 addons enable metrics-server

I1212 00:31:42.578926  207204 addons.go:202] Writing out "functional-582645" config to set dashboard=true...
W1212 00:31:42.579223  207204 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1212 00:31:42.579908  207204 kapi.go:59] client config for functional-582645: &rest.Config{Host:"https://192.168.39.189:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.key", CAFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1212 00:31:42.580390  207204 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1212 00:31:42.580406  207204 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1212 00:31:42.580410  207204 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1212 00:31:42.580415  207204 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1212 00:31:42.580418  207204 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1212 00:31:42.593024  207204 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  38639fe8-03b2-40c0-9e52-11ea7958b7b2 859 0 2025-12-12 00:31:42 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-12 00:31:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.250.89,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.250.89],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1212 00:31:42.593189  207204 out.go:285] * Launching proxy ...
* Launching proxy ...
I1212 00:31:42.593265  207204 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-582645 proxy --port 36195]
I1212 00:31:42.593679  207204 dashboard.go:159] Waiting for kubectl to output host:port ...
I1212 00:31:42.645080  207204 out.go:203] 
W1212 00:31:42.646352  207204 out.go:285] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W1212 00:31:42.646377  207204 out.go:285] * 
* 
W1212 00:31:42.652868  207204 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1212 00:31:42.654310  207204 out.go:203] 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-582645 -n functional-582645
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-582645 logs -n 25: (1.507031862s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-582645 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │                     │
	│ mount     │ -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1887540447/001:/mount-9p --alsologtostderr -v=1              │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │                     │
	│ ssh       │ functional-582645 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ ssh       │ functional-582645 ssh -- ls -la /mount-9p                                                                                                           │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ ssh       │ functional-582645 ssh cat /mount-9p/test-1765499435163221214                                                                                        │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:30 UTC │ 12 Dec 25 00:30 UTC │
	│ ssh       │ functional-582645 ssh stat /mount-9p/created-by-test                                                                                                │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh       │ functional-582645 ssh stat /mount-9p/created-by-pod                                                                                                 │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh       │ functional-582645 ssh sudo umount -f /mount-9p                                                                                                      │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh       │ functional-582645 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ mount     │ -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1839879261/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ ssh       │ functional-582645 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh       │ functional-582645 ssh -- ls -la /mount-9p                                                                                                           │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh       │ functional-582645 ssh sudo umount -f /mount-9p                                                                                                      │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ mount     │ -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3378403719/001:/mount2 --alsologtostderr -v=1                │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ ssh       │ functional-582645 ssh findmnt -T /mount1                                                                                                            │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ mount     │ -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3378403719/001:/mount3 --alsologtostderr -v=1                │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ mount     │ -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3378403719/001:/mount1 --alsologtostderr -v=1                │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ ssh       │ functional-582645 ssh findmnt -T /mount1                                                                                                            │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh       │ functional-582645 ssh findmnt -T /mount2                                                                                                            │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh       │ functional-582645 ssh findmnt -T /mount3                                                                                                            │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ mount     │ -p functional-582645 --kill=true                                                                                                                    │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ start     │ -p functional-582645 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ start     │ -p functional-582645 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ start     │ -p functional-582645 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                   │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-582645 --alsologtostderr -v=1                                                                                      │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:31:41
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:31:41.183948  207172 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:31:41.184213  207172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:31:41.184223  207172 out.go:374] Setting ErrFile to fd 2...
	I1212 00:31:41.184227  207172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:31:41.184477  207172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 00:31:41.185012  207172 out.go:368] Setting JSON to false
	I1212 00:31:41.185873  207172 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":22445,"bootTime":1765477056,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:31:41.185926  207172 start.go:143] virtualization: kvm guest
	I1212 00:31:41.187624  207172 out.go:179] * [functional-582645] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:31:41.188924  207172 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:31:41.188934  207172 notify.go:221] Checking for updates...
	I1212 00:31:41.190990  207172 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:31:41.192269  207172 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 00:31:41.193613  207172 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1212 00:31:41.194714  207172 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:31:41.195764  207172 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:31:41.197191  207172 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:31:41.197682  207172 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:31:41.230637  207172 out.go:179] * Using the kvm2 driver based on existing profile
	I1212 00:31:41.231816  207172 start.go:309] selected driver: kvm2
	I1212 00:31:41.231826  207172 start.go:927] validating driver "kvm2" against &{Name:functional-582645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-582645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:31:41.231932  207172 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:31:41.232862  207172 cni.go:84] Creating CNI manager for ""
	I1212 00:31:41.232936  207172 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:31:41.232987  207172 start.go:353] cluster config:
	{Name:functional-582645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-582645 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:31:41.234403  207172 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.573809873Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765499503573785223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:194440,},InodesUsed:&UInt64Value{Value:83,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b74f73a0-b0ff-4513-9ee0-e85fb484cf06 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.574825527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bebbe551-0fd5-4a7e-bd08-61ac2fa30a35 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.574882036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bebbe551-0fd5-4a7e-bd08-61ac2fa30a35 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.575724643Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c113b3c52f3d8eb17ca20065af6aedfc226e876426fe24a4ea33e33a3ea8bdb0,PodSandboxId:74b3c61d42265646b55cb25ddcda474c43071f4c4dc56b001c7304f622b77b21,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765499499865940720,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2fab52d3-dfee-4229-9795-73be192e28a5,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f29659031313af80a49378109a4b2fdf07e8f1619330e3211e758790d90313,PodSandboxId:adc4a18f7a854f275bac696c262023c499ce44aa66142e36bbd0b17a5d7b6504,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765499494499572383,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76636df6-0816-48f2-bff8-d27f5aa9f041,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd3a1916e05c3f9c08810bdb69f42f5339dd4a154ac3a8f72236927ae79c0be,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765499403286698761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9037b8c6e0fdde79e1caa2de7541de8ca32f46057ca9a49f198a39479d95f35,PodSandboxId:f1a1902cc1de428200573e2bc60a40fba3b7a3ec449ffae7052a70c599a7cab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765499403326051331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},
{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfe4a0733242f3d22152fd7bbcdd733a47cbdbcc2fe129181f332892dd54fb8e,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765499403268966942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernet
es.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65abec4fb96748b7fa9911fa2a917ae3950c5c447e2c1051c20abe79b02e64ce,PodSandboxId:4e3d4571397c70e5fcb32d995e7657d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765499400277965851,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.p
od.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff0a796b0a3adb5ccc954249762009deff44a78e07035b6d8b352d4c8acf883d,PodSandboxId:13194d0880bc441fdf43ba3fea8c3df849f29982cbd02f1a93dba67e9782c96c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNI
NG,CreatedAt:1765499399823706855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b06ac47bd32667bb7072ea44ee355,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8dc4e95a84aabc830309c1fa11f77d8bfef0d69619fb0399e45d1cc4b9d16d,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_CREATED,CreatedAt:1765499377136537689,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e99f55ab1b5c72e6308cadd0638e62d22646246ba0cd61bf203866724a75b04,PodSandboxId:98015c403f3513b2fa5752f068d60dbd5d9cf9fa839d1b2e454f08b0833d8e22,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e285
0085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765499377071811172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ab6a45ca7b55074f7acb53c011ef27b8b712a0f5cb2ce4177c9903327edf7,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765499377047129165,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184aa047ea588a73603848444d576c82fc9a2d9965a8535c1c1c69cad4c8b45,PodSandboxId:2b9fa45b0cc74c02b2ca14c19c8ea3172e838640ed17fb6ac9aeb37b323a7072,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765499376880003217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff93ec24b4c3f630d59692aaf1154438061234fbe88a01cad93b87bf208f829,PodSandboxId:4e3d4571397c70e5fcb32d995e7657d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attemp
t:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765499376819064668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:883270426c1aeb617815fdd5d549cfe44188f69f05dfbd720be8cb33559e1f10,PodSan
dboxId:956f7af726b79df8e4df76cda982ab921c14fbb31e89fd08812a863224b4b945,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765499338908332143,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\
"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc047dec8fa6bf5fc478fa49486e194bea28167e08ebd0dbf49ab377d4a36f59,PodSandboxId:f165cce104f5b01dad5eebed623e1fcf76977b0fe9ae423291bd92340770f4fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765499335278744850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kuber
netes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc85ff6f54fd348d21d8abade09d9f8078de8f5f5a07f985010e91fb3f1a86bf,PodSandboxId:a81223598dd2975079eba8b06da9850c64dd109188013fe7ef6728fc26807994,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765499335259527879,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bebbe551-0fd5-4a7e-bd08-61ac2fa30a35 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.622712114Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d59e6d64-e22b-4c9e-baae-ad62818bf15c name=/runtime.v1.RuntimeService/Version
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.622803123Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d59e6d64-e22b-4c9e-baae-ad62818bf15c name=/runtime.v1.RuntimeService/Version
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.625005346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=919c08a0-c492-4731-ba3f-603b83ff44a5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.625801423Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765499503625776189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:194440,},InodesUsed:&UInt64Value{Value:83,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=919c08a0-c492-4731-ba3f-603b83ff44a5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.627180018Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74e2b716-211b-4363-a739-79c0bcbccc47 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.627259677Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74e2b716-211b-4363-a739-79c0bcbccc47 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.627595093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c113b3c52f3d8eb17ca20065af6aedfc226e876426fe24a4ea33e33a3ea8bdb0,PodSandboxId:74b3c61d42265646b55cb25ddcda474c43071f4c4dc56b001c7304f622b77b21,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765499499865940720,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2fab52d3-dfee-4229-9795-73be192e28a5,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f29659031313af80a49378109a4b2fdf07e8f1619330e3211e758790d90313,PodSandboxId:adc4a18f7a854f275bac696c262023c499ce44aa66142e36bbd0b17a5d7b6504,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765499494499572383,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76636df6-0816-48f2-bff8-d27f5aa9f041,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd3a1916e05c3f9c08810bdb69f42f5339dd4a154ac3a8f72236927ae79c0be,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765499403286698761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9037b8c6e0fdde79e1caa2de7541de8ca32f46057ca9a49f198a39479d95f35,PodSandboxId:f1a1902cc1de428200573e2bc60a40fba3b7a3ec449ffae7052a70c599a7cab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765499403326051331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},
{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfe4a0733242f3d22152fd7bbcdd733a47cbdbcc2fe129181f332892dd54fb8e,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765499403268966942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernet
es.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65abec4fb96748b7fa9911fa2a917ae3950c5c447e2c1051c20abe79b02e64ce,PodSandboxId:4e3d4571397c70e5fcb32d995e7657d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765499400277965851,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.p
od.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff0a796b0a3adb5ccc954249762009deff44a78e07035b6d8b352d4c8acf883d,PodSandboxId:13194d0880bc441fdf43ba3fea8c3df849f29982cbd02f1a93dba67e9782c96c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNI
NG,CreatedAt:1765499399823706855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b06ac47bd32667bb7072ea44ee355,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8dc4e95a84aabc830309c1fa11f77d8bfef0d69619fb0399e45d1cc4b9d16d,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_CREATED,CreatedAt:1765499377136537689,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e99f55ab1b5c72e6308cadd0638e62d22646246ba0cd61bf203866724a75b04,PodSandboxId:98015c403f3513b2fa5752f068d60dbd5d9cf9fa839d1b2e454f08b0833d8e22,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e285
0085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765499377071811172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ab6a45ca7b55074f7acb53c011ef27b8b712a0f5cb2ce4177c9903327edf7,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765499377047129165,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184aa047ea588a73603848444d576c82fc9a2d9965a8535c1c1c69cad4c8b45,PodSandboxId:2b9fa45b0cc74c02b2ca14c19c8ea3172e838640ed17fb6ac9aeb37b323a7072,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765499376880003217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff93ec24b4c3f630d59692aaf1154438061234fbe88a01cad93b87bf208f829,PodSandboxId:4e3d4571397c70e5fcb32d995e7657d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attemp
t:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765499376819064668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:883270426c1aeb617815fdd5d549cfe44188f69f05dfbd720be8cb33559e1f10,PodSan
dboxId:956f7af726b79df8e4df76cda982ab921c14fbb31e89fd08812a863224b4b945,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765499338908332143,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\
"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc047dec8fa6bf5fc478fa49486e194bea28167e08ebd0dbf49ab377d4a36f59,PodSandboxId:f165cce104f5b01dad5eebed623e1fcf76977b0fe9ae423291bd92340770f4fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765499335278744850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kuber
netes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc85ff6f54fd348d21d8abade09d9f8078de8f5f5a07f985010e91fb3f1a86bf,PodSandboxId:a81223598dd2975079eba8b06da9850c64dd109188013fe7ef6728fc26807994,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765499335259527879,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74e2b716-211b-4363-a739-79c0bcbccc47 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.672910017Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=99e51bb2-7c78-4fcd-9f64-4f677aa93183 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.672990633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=99e51bb2-7c78-4fcd-9f64-4f677aa93183 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.674781307Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c588c38d-c1a4-4251-8a54-29f7be0b15da name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.675442934Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765499503675418914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:194440,},InodesUsed:&UInt64Value{Value:83,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c588c38d-c1a4-4251-8a54-29f7be0b15da name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.676588784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=228e6b2a-dfc3-4777-94b7-298a7b9ddc45 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.676731636Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=228e6b2a-dfc3-4777-94b7-298a7b9ddc45 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.677185221Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c113b3c52f3d8eb17ca20065af6aedfc226e876426fe24a4ea33e33a3ea8bdb0,PodSandboxId:74b3c61d42265646b55cb25ddcda474c43071f4c4dc56b001c7304f622b77b21,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765499499865940720,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2fab52d3-dfee-4229-9795-73be192e28a5,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f29659031313af80a49378109a4b2fdf07e8f1619330e3211e758790d90313,PodSandboxId:adc4a18f7a854f275bac696c262023c499ce44aa66142e36bbd0b17a5d7b6504,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765499494499572383,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76636df6-0816-48f2-bff8-d27f5aa9f041,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd3a1916e05c3f9c08810bdb69f42f5339dd4a154ac3a8f72236927ae79c0be,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765499403286698761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9037b8c6e0fdde79e1caa2de7541de8ca32f46057ca9a49f198a39479d95f35,PodSandboxId:f1a1902cc1de428200573e2bc60a40fba3b7a3ec449ffae7052a70c599a7cab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765499403326051331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},
{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfe4a0733242f3d22152fd7bbcdd733a47cbdbcc2fe129181f332892dd54fb8e,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765499403268966942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernet
es.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65abec4fb96748b7fa9911fa2a917ae3950c5c447e2c1051c20abe79b02e64ce,PodSandboxId:4e3d4571397c70e5fcb32d995e7657d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765499400277965851,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.p
od.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff0a796b0a3adb5ccc954249762009deff44a78e07035b6d8b352d4c8acf883d,PodSandboxId:13194d0880bc441fdf43ba3fea8c3df849f29982cbd02f1a93dba67e9782c96c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNI
NG,CreatedAt:1765499399823706855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b06ac47bd32667bb7072ea44ee355,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8dc4e95a84aabc830309c1fa11f77d8bfef0d69619fb0399e45d1cc4b9d16d,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_CREATED,CreatedAt:1765499377136537689,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e99f55ab1b5c72e6308cadd0638e62d22646246ba0cd61bf203866724a75b04,PodSandboxId:98015c403f3513b2fa5752f068d60dbd5d9cf9fa839d1b2e454f08b0833d8e22,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e285
0085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765499377071811172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ab6a45ca7b55074f7acb53c011ef27b8b712a0f5cb2ce4177c9903327edf7,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765499377047129165,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184aa047ea588a73603848444d576c82fc9a2d9965a8535c1c1c69cad4c8b45,PodSandboxId:2b9fa45b0cc74c02b2ca14c19c8ea3172e838640ed17fb6ac9aeb37b323a7072,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765499376880003217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff93ec24b4c3f630d59692aaf1154438061234fbe88a01cad93b87bf208f829,PodSandboxId:4e3d4571397c70e5fcb32d995e7657d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attemp
t:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765499376819064668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:883270426c1aeb617815fdd5d549cfe44188f69f05dfbd720be8cb33559e1f10,PodSan
dboxId:956f7af726b79df8e4df76cda982ab921c14fbb31e89fd08812a863224b4b945,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765499338908332143,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\
"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc047dec8fa6bf5fc478fa49486e194bea28167e08ebd0dbf49ab377d4a36f59,PodSandboxId:f165cce104f5b01dad5eebed623e1fcf76977b0fe9ae423291bd92340770f4fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765499335278744850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kuber
netes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc85ff6f54fd348d21d8abade09d9f8078de8f5f5a07f985010e91fb3f1a86bf,PodSandboxId:a81223598dd2975079eba8b06da9850c64dd109188013fe7ef6728fc26807994,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765499335259527879,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=228e6b2a-dfc3-4777-94b7-298a7b9ddc45 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.727729899Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab20812e-4197-4224-9571-04a9fc24bd5a name=/runtime.v1.RuntimeService/Version
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.728306839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab20812e-4197-4224-9571-04a9fc24bd5a name=/runtime.v1.RuntimeService/Version
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.730680567Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=225efa64-198c-4d59-8414-172be6bf0f6c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.731544456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765499503731518103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:194440,},InodesUsed:&UInt64Value{Value:83,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=225efa64-198c-4d59-8414-172be6bf0f6c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.733263669Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=669f512c-95bb-44d1-9c79-a3e68941fc84 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.733469537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=669f512c-95bb-44d1-9c79-a3e68941fc84 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:31:43 functional-582645 crio[6047]: time="2025-12-12 00:31:43.734315599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c113b3c52f3d8eb17ca20065af6aedfc226e876426fe24a4ea33e33a3ea8bdb0,PodSandboxId:74b3c61d42265646b55cb25ddcda474c43071f4c4dc56b001c7304f622b77b21,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765499499865940720,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2fab52d3-dfee-4229-9795-73be192e28a5,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f29659031313af80a49378109a4b2fdf07e8f1619330e3211e758790d90313,PodSandboxId:adc4a18f7a854f275bac696c262023c499ce44aa66142e36bbd0b17a5d7b6504,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765499494499572383,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76636df6-0816-48f2-bff8-d27f5aa9f041,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd3a1916e05c3f9c08810bdb69f42f5339dd4a154ac3a8f72236927ae79c0be,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765499403286698761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9037b8c6e0fdde79e1caa2de7541de8ca32f46057ca9a49f198a39479d95f35,PodSandboxId:f1a1902cc1de428200573e2bc60a40fba3b7a3ec449ffae7052a70c599a7cab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765499403326051331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},
{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfe4a0733242f3d22152fd7bbcdd733a47cbdbcc2fe129181f332892dd54fb8e,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765499403268966942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernet
es.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65abec4fb96748b7fa9911fa2a917ae3950c5c447e2c1051c20abe79b02e64ce,PodSandboxId:4e3d4571397c70e5fcb32d995e7657d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765499400277965851,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.p
od.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff0a796b0a3adb5ccc954249762009deff44a78e07035b6d8b352d4c8acf883d,PodSandboxId:13194d0880bc441fdf43ba3fea8c3df849f29982cbd02f1a93dba67e9782c96c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNI
NG,CreatedAt:1765499399823706855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b06ac47bd32667bb7072ea44ee355,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8dc4e95a84aabc830309c1fa11f77d8bfef0d69619fb0399e45d1cc4b9d16d,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_CREATED,CreatedAt:1765499377136537689,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e99f55ab1b5c72e6308cadd0638e62d22646246ba0cd61bf203866724a75b04,PodSandboxId:98015c403f3513b2fa5752f068d60dbd5d9cf9fa839d1b2e454f08b0833d8e22,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e285
0085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765499377071811172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ab6a45ca7b55074f7acb53c011ef27b8b712a0f5cb2ce4177c9903327edf7,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765499377047129165,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184aa047ea588a73603848444d576c82fc9a2d9965a8535c1c1c69cad4c8b45,PodSandboxId:2b9fa45b0cc74c02b2ca14c19c8ea3172e838640ed17fb6ac9aeb37b323a7072,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765499376880003217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff93ec24b4c3f630d59692aaf1154438061234fbe88a01cad93b87bf208f829,PodSandboxId:4e3d4571397c70e5fcb32d995e7657d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attemp
t:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765499376819064668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:883270426c1aeb617815fdd5d549cfe44188f69f05dfbd720be8cb33559e1f10,PodSan
dboxId:956f7af726b79df8e4df76cda982ab921c14fbb31e89fd08812a863224b4b945,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765499338908332143,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\
"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc047dec8fa6bf5fc478fa49486e194bea28167e08ebd0dbf49ab377d4a36f59,PodSandboxId:f165cce104f5b01dad5eebed623e1fcf76977b0fe9ae423291bd92340770f4fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765499335278744850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kuber
netes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc85ff6f54fd348d21d8abade09d9f8078de8f5f5a07f985010e91fb3f1a86bf,PodSandboxId:a81223598dd2975079eba8b06da9850c64dd109188013fe7ef6728fc26807994,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765499335259527879,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=669f512c-95bb-44d1-9c79-a3e68941fc84 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c113b3c52f3d8       a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c                                      3 seconds ago        Running             myfrontend                0                   74b3c61d42265       sp-pod                                      default
	00f2965903131       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 seconds ago        Exited              mount-munger              0                   adc4a18f7a854       busybox-mount                               default
	b9037b8c6e0fd       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      About a minute ago   Running             coredns                   3                   f1a1902cc1de4       coredns-7d764666f9-ww2mp                    kube-system
	5bd3a1916e05c       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      About a minute ago   Running             kube-proxy                4                   a652617fe1f84       kube-proxy-cpcsm                            kube-system
	dfe4a0733242f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   5d4fdc5860729       storage-provisioner                         kube-system
	65abec4fb9674       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      About a minute ago   Running             kube-controller-manager   4                   4e3d4571397c7       kube-controller-manager-functional-582645   kube-system
	ff0a796b0a3ad       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      About a minute ago   Running             kube-apiserver            0                   13194d0880bc4       kube-apiserver-functional-582645            kube-system
	5c8dc4e95a84a       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      2 minutes ago        Created             kube-proxy                3                   a652617fe1f84       kube-proxy-cpcsm                            kube-system
	3e99f55ab1b5c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      2 minutes ago        Running             etcd                      3                   98015c403f351       etcd-functional-582645                      kube-system
	1e9ab6a45ca7b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   5d4fdc5860729       storage-provisioner                         kube-system
	6184aa047ea58       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      2 minutes ago        Running             kube-scheduler            3                   2b9fa45b0cc74       kube-scheduler-functional-582645            kube-system
	6ff93ec24b4c3       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      2 minutes ago        Exited              kube-controller-manager   3                   4e3d4571397c7       kube-controller-manager-functional-582645   kube-system
	883270426c1ae       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      2 minutes ago        Exited              coredns                   2                   956f7af726b79       coredns-7d764666f9-ww2mp                    kube-system
	bc047dec8fa6b       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      2 minutes ago        Exited              kube-scheduler            2                   f165cce104f5b       kube-scheduler-functional-582645            kube-system
	bc85ff6f54fd3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      2 minutes ago        Exited              etcd                      2                   a81223598dd29       etcd-functional-582645                      kube-system
	
	
	==> coredns [883270426c1aeb617815fdd5d549cfe44188f69f05dfbd720be8cb33559e1f10] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:39797 - 29675 "HINFO IN 3914855379281731377.3141388153415285341. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044280429s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b9037b8c6e0fdde79e1caa2de7541de8ca32f46057ca9a49f198a39479d95f35] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46861 - 58619 "HINFO IN 5502039683480278376.787475168878439647. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.025734264s
	
	
	==> describe nodes <==
	Name:               functional-582645
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-582645
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=functional-582645
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_27_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:27:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-582645
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:31:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:30:42 +0000   Fri, 12 Dec 2025 00:27:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:30:42 +0000   Fri, 12 Dec 2025 00:27:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:30:42 +0000   Fri, 12 Dec 2025 00:27:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:30:42 +0000   Fri, 12 Dec 2025 00:27:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    functional-582645
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd2834266c3148d1919fc4d6d8c86b75
	  System UUID:                dd283426-6c31-48d1-919f-c4d6d8c86b75
	  Boot ID:                    76932dbb-dfdc-4ecf-a151-f63db4239adb
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-4j55r                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  default                     hello-node-connect-9f67c86d4-8cddf            0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-7d764666f9-ww2mp                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m40s
	  kube-system                 etcd-functional-582645                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m46s
	  kube-system                 kube-apiserver-functional-582645              250m (12%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-functional-582645     200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kube-proxy-cpcsm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kube-scheduler-functional-582645              100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-k5xx5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-6hdv4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  3m42s  node-controller  Node functional-582645 event: Registered Node functional-582645 in Controller
	  Normal  RegisteredNode  2m43s  node-controller  Node functional-582645 event: Registered Node functional-582645 in Controller
	  Normal  RegisteredNode  99s    node-controller  Node functional-582645 event: Registered Node functional-582645 in Controller
	
	
	==> dmesg <==
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000058] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005610] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.209112] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.086095] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.100463] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.153016] kauditd_printk_skb: 171 callbacks suppressed
	[Dec12 00:28] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.036671] kauditd_printk_skb: 236 callbacks suppressed
	[ +22.976464] kauditd_printk_skb: 39 callbacks suppressed
	[  +0.134953] kauditd_printk_skb: 499 callbacks suppressed
	[  +5.193393] kauditd_printk_skb: 126 callbacks suppressed
	[Dec12 00:29] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.114734] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.598850] kauditd_printk_skb: 78 callbacks suppressed
	[ +20.701032] kauditd_printk_skb: 285 callbacks suppressed
	[Dec12 00:30] kauditd_printk_skb: 85 callbacks suppressed
	[ +17.679919] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.282501] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000048] kauditd_printk_skb: 74 callbacks suppressed
	[Dec12 00:31] kauditd_printk_skb: 63 callbacks suppressed
	[  +0.019327] kauditd_printk_skb: 38 callbacks suppressed
	[  +2.998659] kauditd_printk_skb: 69 callbacks suppressed
	
	
	==> etcd [3e99f55ab1b5c72e6308cadd0638e62d22646246ba0cd61bf203866724a75b04] <==
	{"level":"warn","ts":"2025-12-12T00:30:00.942505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:00.955597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:00.967659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:00.982884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:00.995907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.013259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.027398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.051660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.060411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.074861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.078582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.087824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.098341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.109854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.118704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.130516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.139144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.149658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.158158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.168603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.180628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.193692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.200874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.210448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:30:01.296407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48382","server-name":"","error":"EOF"}
	
	
	==> etcd [bc85ff6f54fd348d21d8abade09d9f8078de8f5f5a07f985010e91fb3f1a86bf] <==
	{"level":"warn","ts":"2025-12-12T00:28:57.450613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:57.462179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:57.479654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:57.486013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:57.504265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:57.519819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:57.568649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43106","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T00:29:21.122004Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-12T00:29:21.122053Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-582645","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	{"level":"error","ts":"2025-12-12T00:29:21.122416Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T00:29:21.262178Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T00:29:21.263857Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T00:29:21.263911Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6fb28b9aae66857a","current-leader-member-id":"6fb28b9aae66857a"}
	{"level":"warn","ts":"2025-12-12T00:29:21.263916Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-12T00:29:21.263990Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-12T00:29:21.264003Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T00:29:21.264013Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T00:29:21.264017Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-12T00:29:21.264065Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T00:29:21.264122Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T00:29:21.264149Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.189:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T00:29:21.268166Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"error","ts":"2025-12-12T00:29:21.268250Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.189:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T00:29:21.268275Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2025-12-12T00:29:21.268281Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-582645","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	
	
	==> kernel <==
	 00:31:44 up 4 min,  0 users,  load average: 1.28, 0.68, 0.30
	Linux functional-582645 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ff0a796b0a3adb5ccc954249762009deff44a78e07035b6d8b352d4c8acf883d] <==
	I1212 00:30:02.131610       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 00:30:02.131771       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1212 00:30:02.132540       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 00:30:02.132697       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:30:02.138248       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1212 00:30:02.150674       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 00:30:02.154776       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 00:30:02.168316       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 00:30:02.173939       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:30:02.836366       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 00:30:03.046514       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:30:03.798461       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:30:03.850314       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 00:30:03.882811       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:30:03.894127       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:30:05.530734       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:30:05.630510       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 00:30:23.204290       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.21.170"}
	I1212 00:30:28.592483       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:30:28.766925       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.200.130"}
	I1212 00:30:29.143617       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.159.139"}
	E1212 00:31:38.387048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.189:8441->192.168.39.1:58130: use of closed network connection
	I1212 00:31:42.202372       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:31:42.536826       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.250.89"}
	I1212 00:31:42.561334       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.89.97"}
	
	
	==> kube-controller-manager [65abec4fb96748b7fa9911fa2a917ae3950c5c447e2c1051c20abe79b02e64ce] <==
	I1212 00:30:05.282707       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.280338       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282718       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282723       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282728       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282732       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282738       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282743       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282713       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.280323       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282700       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.296675       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:30:05.324181       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.383401       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.383434       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 00:30:05.383440       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1212 00:30:05.399283       1 shared_informer.go:377] "Caches are synced"
	E1212 00:31:42.327513       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.337995       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.339714       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.347532       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.351546       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.363974       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.364000       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.375131       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [6ff93ec24b4c3f630d59692aaf1154438061234fbe88a01cad93b87bf208f829] <==
	I1212 00:29:38.027721       1 serving.go:386] Generated self-signed cert in-memory
	I1212 00:29:38.063805       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1212 00:29:38.068179       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:29:38.071923       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1212 00:29:38.072251       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 00:29:38.072318       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1212 00:29:38.072334       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 00:29:59.377257       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.189:8441/healthz\": dial tcp 192.168.39.189:8441: connect: connection refused"
	
	
	==> kube-proxy [5bd3a1916e05c3f9c08810bdb69f42f5339dd4a154ac3a8f72236927ae79c0be] <==
	I1212 00:30:03.781889       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:30:03.886002       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:03.886234       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.189"]
	E1212 00:30:03.886447       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:30:03.952877       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1212 00:30:03.953174       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 00:30:03.953212       1 server_linux.go:136] "Using iptables Proxier"
	I1212 00:30:03.977980       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:30:03.978444       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 00:30:03.978472       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:30:03.999781       1 config.go:200] "Starting service config controller"
	I1212 00:30:04.000152       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:30:04.000574       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:30:04.001887       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:30:04.002014       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:30:04.002020       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:30:04.073340       1 config.go:309] "Starting node config controller"
	I1212 00:30:04.073393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:30:04.073402       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:30:04.104533       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:30:04.104769       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:30:04.104800       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [5c8dc4e95a84aabc830309c1fa11f77d8bfef0d69619fb0399e45d1cc4b9d16d] <==
	
	
	==> kube-scheduler [6184aa047ea588a73603848444d576c82fc9a2d9965a8535c1c1c69cad4c8b45] <==
	E1212 00:30:02.008282       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:30:02.009419       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1212 00:30:02.011270       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1212 00:30:02.011362       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:30:02.011418       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1212 00:30:02.061223       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1212 00:30:02.061355       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1212 00:30:02.065392       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1212 00:30:02.066247       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1212 00:30:02.066613       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1212 00:30:02.066711       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1212 00:30:02.066787       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1212 00:30:02.066859       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1212 00:30:02.066943       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1212 00:30:02.067000       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1212 00:30:02.067070       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1212 00:30:02.067846       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1212 00:30:02.068457       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1212 00:30:02.068580       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1212 00:30:02.068649       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1212 00:30:02.068718       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1212 00:30:02.068847       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1212 00:30:02.068934       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1212 00:30:02.068472       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	I1212 00:30:03.990533       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [bc047dec8fa6bf5fc478fa49486e194bea28167e08ebd0dbf49ab377d4a36f59] <==
	I1212 00:28:58.283931       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:28:58.284419       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:28:58.284484       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 00:28:58.321421       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326389       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326471       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326516       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1212 00:28:58.326543       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326572       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:28:58.326603       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326632       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:28:58.326660       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1212 00:28:58.326684       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326708       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:28:58.326738       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1212 00:28:58.326798       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326827       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1212 00:28:58.326857       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:28:58.326877       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:28:58.326942       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	I1212 00:28:58.384698       1 shared_informer.go:377] "Caches are synced"
	I1212 00:29:21.165005       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1212 00:29:21.165051       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1212 00:29:21.165065       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1212 00:29:21.168740       1 server.go:265] "[graceful-termination] secure server is exiting"
	
	
	==> kubelet <==
	Dec 12 00:31:38 functional-582645 kubelet[7000]: E1212 00:31:38.387460    7000 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58442->127.0.0.1:41889: write tcp 127.0.0.1:58442->127.0.0.1:41889: write: broken pipe
	Dec 12 00:31:38 functional-582645 kubelet[7000]: I1212 00:31:38.787514    7000 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/95ded433-3fae-49a7-a341-fcae48ef5706-pvc-2fe2782b-4cf9-4259-8930-88b1a36cf8e5\" (UniqueName: \"kubernetes.io/host-path/95ded433-3fae-49a7-a341-fcae48ef5706-pvc-2fe2782b-4cf9-4259-8930-88b1a36cf8e5\") pod \"95ded433-3fae-49a7-a341-fcae48ef5706\" (UID: \"95ded433-3fae-49a7-a341-fcae48ef5706\") "
	Dec 12 00:31:38 functional-582645 kubelet[7000]: I1212 00:31:38.787588    7000 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/95ded433-3fae-49a7-a341-fcae48ef5706-kube-api-access-glprc\" (UniqueName: \"kubernetes.io/projected/95ded433-3fae-49a7-a341-fcae48ef5706-kube-api-access-glprc\") pod \"95ded433-3fae-49a7-a341-fcae48ef5706\" (UID: \"95ded433-3fae-49a7-a341-fcae48ef5706\") "
	Dec 12 00:31:38 functional-582645 kubelet[7000]: I1212 00:31:38.787678    7000 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ded433-3fae-49a7-a341-fcae48ef5706-pvc-2fe2782b-4cf9-4259-8930-88b1a36cf8e5" pod "95ded433-3fae-49a7-a341-fcae48ef5706" (UID: "95ded433-3fae-49a7-a341-fcae48ef5706"). InnerVolumeSpecName "pvc-2fe2782b-4cf9-4259-8930-88b1a36cf8e5". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 12 00:31:38 functional-582645 kubelet[7000]: I1212 00:31:38.791017    7000 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95ded433-3fae-49a7-a341-fcae48ef5706-kube-api-access-glprc" pod "95ded433-3fae-49a7-a341-fcae48ef5706" (UID: "95ded433-3fae-49a7-a341-fcae48ef5706"). InnerVolumeSpecName "kube-api-access-glprc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 12 00:31:38 functional-582645 kubelet[7000]: I1212 00:31:38.889052    7000 reconciler_common.go:299] "Volume detached for volume \"pvc-2fe2782b-4cf9-4259-8930-88b1a36cf8e5\" (UniqueName: \"kubernetes.io/host-path/95ded433-3fae-49a7-a341-fcae48ef5706-pvc-2fe2782b-4cf9-4259-8930-88b1a36cf8e5\") on node \"functional-582645\" DevicePath \"\""
	Dec 12 00:31:38 functional-582645 kubelet[7000]: I1212 00:31:38.889275    7000 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-glprc\" (UniqueName: \"kubernetes.io/projected/95ded433-3fae-49a7-a341-fcae48ef5706-kube-api-access-glprc\") on node \"functional-582645\" DevicePath \"\""
	Dec 12 00:31:39 functional-582645 kubelet[7000]: I1212 00:31:39.025976    7000 scope.go:122] "RemoveContainer" containerID="de4ae5ad3ad1daa86cc5496498de4f495647547a555ccb27f37c0252ac450343"
	Dec 12 00:31:39 functional-582645 kubelet[7000]: E1212 00:31:39.124830    7000 manager.go:1119] Failed to create existing container: /kubepods/burstable/podaf19c393-547d-45e8-a67d-56ded826444b/crio-956f7af726b79df8e4df76cda982ab921c14fbb31e89fd08812a863224b4b945: Error finding container 956f7af726b79df8e4df76cda982ab921c14fbb31e89fd08812a863224b4b945: Status 404 returned error can't find the container with id 956f7af726b79df8e4df76cda982ab921c14fbb31e89fd08812a863224b4b945
	Dec 12 00:31:39 functional-582645 kubelet[7000]: E1212 00:31:39.125511    7000 manager.go:1119] Failed to create existing container: /kubepods/burstable/podc47db1197b7f7aa415dfbf4aa2326354/crio-f165cce104f5b01dad5eebed623e1fcf76977b0fe9ae423291bd92340770f4fe: Error finding container f165cce104f5b01dad5eebed623e1fcf76977b0fe9ae423291bd92340770f4fe: Status 404 returned error can't find the container with id f165cce104f5b01dad5eebed623e1fcf76977b0fe9ae423291bd92340770f4fe
	Dec 12 00:31:39 functional-582645 kubelet[7000]: E1212 00:31:39.128199    7000 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod0a63b02a42adbcf4ac7f47ff294bd886/crio-a81223598dd2975079eba8b06da9850c64dd109188013fe7ef6728fc26807994: Error finding container a81223598dd2975079eba8b06da9850c64dd109188013fe7ef6728fc26807994: Status 404 returned error can't find the container with id a81223598dd2975079eba8b06da9850c64dd109188013fe7ef6728fc26807994
	Dec 12 00:31:39 functional-582645 kubelet[7000]: I1212 00:31:39.184228    7000 scope.go:122] "RemoveContainer" containerID="de4ae5ad3ad1daa86cc5496498de4f495647547a555ccb27f37c0252ac450343"
	Dec 12 00:31:39 functional-582645 kubelet[7000]: E1212 00:31:39.184523    7000 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765499499181265085  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194440}  inodes_used:{value:83}}"
	Dec 12 00:31:39 functional-582645 kubelet[7000]: E1212 00:31:39.184547    7000 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765499499181265085  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194440}  inodes_used:{value:83}}"
	Dec 12 00:31:39 functional-582645 kubelet[7000]: E1212 00:31:39.197421    7000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de4ae5ad3ad1daa86cc5496498de4f495647547a555ccb27f37c0252ac450343\": container with ID starting with de4ae5ad3ad1daa86cc5496498de4f495647547a555ccb27f37c0252ac450343 not found: ID does not exist" containerID="de4ae5ad3ad1daa86cc5496498de4f495647547a555ccb27f37c0252ac450343"
	Dec 12 00:31:39 functional-582645 kubelet[7000]: I1212 00:31:39.197765    7000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de4ae5ad3ad1daa86cc5496498de4f495647547a555ccb27f37c0252ac450343"} err="failed to get container status \"de4ae5ad3ad1daa86cc5496498de4f495647547a555ccb27f37c0252ac450343\": rpc error: code = NotFound desc = could not find container \"de4ae5ad3ad1daa86cc5496498de4f495647547a555ccb27f37c0252ac450343\": container with ID starting with de4ae5ad3ad1daa86cc5496498de4f495647547a555ccb27f37c0252ac450343 not found: ID does not exist"
	Dec 12 00:31:39 functional-582645 kubelet[7000]: I1212 00:31:39.302833    7000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjfck\" (UniqueName: \"kubernetes.io/projected/2fab52d3-dfee-4229-9795-73be192e28a5-kube-api-access-pjfck\") pod \"sp-pod\" (UID: \"2fab52d3-dfee-4229-9795-73be192e28a5\") " pod="default/sp-pod"
	Dec 12 00:31:39 functional-582645 kubelet[7000]: I1212 00:31:39.302898    7000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2fe2782b-4cf9-4259-8930-88b1a36cf8e5\" (UniqueName: \"kubernetes.io/host-path/2fab52d3-dfee-4229-9795-73be192e28a5-pvc-2fe2782b-4cf9-4259-8930-88b1a36cf8e5\") pod \"sp-pod\" (UID: \"2fab52d3-dfee-4229-9795-73be192e28a5\") " pod="default/sp-pod"
	Dec 12 00:31:39 functional-582645 kubelet[7000]: E1212 00:31:39.955356    7000 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-582645" containerName="kube-scheduler"
	Dec 12 00:31:40 functional-582645 kubelet[7000]: I1212 00:31:40.960904    7000 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="95ded433-3fae-49a7-a341-fcae48ef5706" path="/var/lib/kubelet/pods/95ded433-3fae-49a7-a341-fcae48ef5706/volumes"
	Dec 12 00:31:42 functional-582645 kubelet[7000]: I1212 00:31:42.407456    7000 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=3.407442089 podStartE2EDuration="3.407442089s" podCreationTimestamp="2025-12-12 00:31:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:31:40.054883172 +0000 UTC m=+121.281919438" watchObservedRunningTime="2025-12-12 00:31:42.407442089 +0000 UTC m=+123.634478353"
	Dec 12 00:31:42 functional-582645 kubelet[7000]: I1212 00:31:42.528630    7000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56wms\" (UniqueName: \"kubernetes.io/projected/8b0ed4a1-5dbe-43e8-b613-430d3e3514df-kube-api-access-56wms\") pod \"kubernetes-dashboard-b84665fb8-6hdv4\" (UID: \"8b0ed4a1-5dbe-43e8-b613-430d3e3514df\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6hdv4"
	Dec 12 00:31:42 functional-582645 kubelet[7000]: I1212 00:31:42.529023    7000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/16fb483d-856f-4e93-a7a6-a7af3ab45688-tmp-volume\") pod \"dashboard-metrics-scraper-5565989548-k5xx5\" (UID: \"16fb483d-856f-4e93-a7a6-a7af3ab45688\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-k5xx5"
	Dec 12 00:31:42 functional-582645 kubelet[7000]: I1212 00:31:42.529055    7000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66mdf\" (UniqueName: \"kubernetes.io/projected/16fb483d-856f-4e93-a7a6-a7af3ab45688-kube-api-access-66mdf\") pod \"dashboard-metrics-scraper-5565989548-k5xx5\" (UID: \"16fb483d-856f-4e93-a7a6-a7af3ab45688\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-k5xx5"
	Dec 12 00:31:42 functional-582645 kubelet[7000]: I1212 00:31:42.529144    7000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8b0ed4a1-5dbe-43e8-b613-430d3e3514df-tmp-volume\") pod \"kubernetes-dashboard-b84665fb8-6hdv4\" (UID: \"8b0ed4a1-5dbe-43e8-b613-430d3e3514df\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6hdv4"
	
	
	==> storage-provisioner [1e9ab6a45ca7b55074f7acb53c011ef27b8b712a0f5cb2ce4177c9903327edf7] <==
	I1212 00:29:37.563740       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 00:29:37.569698       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [dfe4a0733242f3d22152fd7bbcdd733a47cbdbcc2fe129181f332892dd54fb8e] <==
	W1212 00:31:19.418792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:21.423409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:21.431964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:23.436669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:23.449486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:25.452994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:25.459165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:27.463308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:27.473554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:29.477942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:29.483807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:31.487906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:31.497476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:33.502137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:33.507686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:35.511161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:35.519686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:37.523641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:37.529706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:39.533379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:39.543190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:41.548825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:41.557430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:43.565542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:31:43.571422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-582645 -n functional-582645
helpers_test.go:270: (dbg) Run:  kubectl --context functional-582645 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-4j55r hello-node-connect-9f67c86d4-8cddf dashboard-metrics-scraper-5565989548-k5xx5 kubernetes-dashboard-b84665fb8-6hdv4
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-582645 describe pod busybox-mount hello-node-5758569b79-4j55r hello-node-connect-9f67c86d4-8cddf dashboard-metrics-scraper-5565989548-k5xx5 kubernetes-dashboard-b84665fb8-6hdv4
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-582645 describe pod busybox-mount hello-node-5758569b79-4j55r hello-node-connect-9f67c86d4-8cddf dashboard-metrics-scraper-5565989548-k5xx5 kubernetes-dashboard-b84665fb8-6hdv4: exit status 1 (102.359989ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-582645/192.168.39.189
	Start Time:       Fri, 12 Dec 2025 00:30:36 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://00f29659031313af80a49378109a4b2fdf07e8f1619330e3211e758790d90313
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 12 Dec 2025 00:31:34 +0000
	      Finished:     Fri, 12 Dec 2025 00:31:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ft9ml (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-ft9ml:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  69s   default-scheduler  Successfully assigned default/busybox-mount to functional-582645
	  Normal  Pulling    68s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     11s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.148s (57.323s including waiting). Image size: 4631262 bytes.
	  Normal  Created    11s   kubelet            Container created
	  Normal  Started    11s   kubelet            Container started
	
	
	Name:             hello-node-5758569b79-4j55r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-582645/192.168.39.189
	Start Time:       Fri, 12 Dec 2025 00:30:29 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cflqj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cflqj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age               From               Message
	  ----     ------     ----              ----               -------
	  Normal   Scheduled  76s               default-scheduler  Successfully assigned default/hello-node-5758569b79-4j55r to functional-582645
	  Warning  Failed     16s               kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     16s               kubelet            Error: ErrImagePull
	  Normal   BackOff    15s               kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     15s               kubelet            Error: ImagePullBackOff
	  Normal   Pulling    3s (x2 over 76s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-8cddf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-582645/192.168.39.189
	Start Time:       Fri, 12 Dec 2025 00:30:28 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6mhqr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6mhqr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  77s                default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-8cddf to functional-582645
	  Warning  Failed     46s                kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     46s                kubelet            Error: ErrImagePull
	  Normal   BackOff    45s                kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     45s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    31s (x2 over 76s)  kubelet            Pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-k5xx5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-6hdv4" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-582645 describe pod busybox-mount hello-node-5758569b79-4j55r hello-node-connect-9f67c86d4-8cddf dashboard-metrics-scraper-5565989548-k5xx5 kubernetes-dashboard-b84665fb8-6hdv4: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (3.80s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (603.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-582645 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-582645 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-8cddf" [447df605-a8b4-4c5a-8e24-744096296393] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-582645 -n functional-582645
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-12 00:40:29.078497069 +0000 UTC m=+2693.788130055
functional_test.go:1645: (dbg) Run:  kubectl --context functional-582645 describe po hello-node-connect-9f67c86d4-8cddf -n default
functional_test.go:1645: (dbg) kubectl --context functional-582645 describe po hello-node-connect-9f67c86d4-8cddf -n default:
Name:             hello-node-connect-9f67c86d4-8cddf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-582645/192.168.39.189
Start Time:       Fri, 12 Dec 2025 00:30:28 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6mhqr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6mhqr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-8cddf to functional-582645
Warning  Failed     9m30s                  kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m23s (x2 over 7m55s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    65s (x11 over 9m29s)   kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     65s (x11 over 9m29s)   kubelet            Error: ImagePullBackOff
Normal   Pulling    50s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     8s (x5 over 9m30s)     kubelet            Error: ErrImagePull
Warning  Failed     8s (x2 over 5m24s)     kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
functional_test.go:1645: (dbg) Run:  kubectl --context functional-582645 logs hello-node-connect-9f67c86d4-8cddf -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-582645 logs hello-node-connect-9f67c86d4-8cddf -n default: exit status 1 (105.480146ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-8cddf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-582645 logs hello-node-connect-9f67c86d4-8cddf -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-582645 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-8cddf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-582645/192.168.39.189
Start Time:       Fri, 12 Dec 2025 00:30:28 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6mhqr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6mhqr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-8cddf to functional-582645
Warning  Failed     9m30s                  kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m23s (x2 over 7m55s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    65s (x11 over 9m29s)   kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     65s (x11 over 9m29s)   kubelet            Error: ImagePullBackOff
Normal   Pulling    50s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     8s (x5 over 9m30s)     kubelet            Error: ErrImagePull
Warning  Failed     8s (x2 over 5m24s)     kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-582645 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-582645 logs -l app=hello-node-connect: exit status 1 (70.69408ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-8cddf" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-582645 logs -l app=hello-node-connect" failed: exit status 1
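`kubectl logs` returns `BadRequest` here because the container never started, so there is nothing to stream. The waiting reason lives in the pod's `status.containerStatuses`, which survives even when logs do not. A sketch of extracting it from a saved pod JSON (the `/tmp/pod.json` contents are an assumed, trimmed-down status; on a live cluster one would use `kubectl get pod <name> -o json`):

```shell
# Minimal assumed pod status, shaped like what `kubectl get pod -o json` returns.
cat > /tmp/pod.json <<'EOF'
{"status":{"containerStatuses":[{"name":"echo-server","state":{"waiting":{"reason":"ImagePullBackOff"}}}]}}
EOF

# Pull out the waiting reason without needing jq.
sed -n 's/.*"reason":"\([^"]*\)".*/\1/p' /tmp/pod.json
# prints: ImagePullBackOff
```

When the reason is `ImagePullBackOff` or `ErrImagePull`, retrying `kubectl logs` is pointless until the image pull itself succeeds.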
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-582645 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.200.130
IPs:                      10.99.200.130
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31860/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
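The Service describe above shows an empty `Endpoints:` line: the selector `app=hello-node-connect` matches the pod, but the pod is not Ready, so no endpoint is published and the NodePort has nothing to forward to. A sketch of flagging that condition from a saved describe dump (the `/tmp/svc.txt` contents are assumptions excerpted from the output above):

```shell
# Assumed excerpt of `kubectl describe svc hello-node-connect` output.
cat > /tmp/svc.txt <<'EOF'
Name:                     hello-node-connect
Type:                     NodePort
NodePort:                 <unset>  31860/TCP
Endpoints:
EOF

# An Endpoints line with no addresses means no ready pod backs the Service.
if grep -qE '^Endpoints: *$' /tmp/svc.txt; then
  echo "no ready endpoints back this Service"
fi
```

So the connect test failing is a direct consequence of the image-pull failure: the Service wiring itself is correct, but there is no ready backend.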
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-582645 -n functional-582645
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-582645 logs -n 25: (1.647081329s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                    ARGS                                                                     │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-582645 ssh findmnt -T /mount3                                                                                                    │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ mount          │ -p functional-582645 --kill=true                                                                                                            │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ start          │ -p functional-582645 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ start          │ -p functional-582645 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ start          │ -p functional-582645 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-582645 --alsologtostderr -v=1                                                                              │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ ssh            │ functional-582645 ssh sudo cat /etc/ssl/certs/190272.pem                                                                                    │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh            │ functional-582645 ssh sudo cat /usr/share/ca-certificates/190272.pem                                                                        │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh            │ functional-582645 ssh sudo cat /etc/test/nested/copy/190272/hosts                                                                           │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh            │ functional-582645 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                    │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ license        │                                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh            │ functional-582645 ssh sudo cat /etc/ssl/certs/1902722.pem                                                                                   │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh            │ functional-582645 ssh sudo cat /usr/share/ca-certificates/1902722.pem                                                                       │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh            │ functional-582645 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                    │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ image          │ functional-582645 image ls --format short --alsologtostderr                                                                                 │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ image          │ functional-582645 image ls --format yaml --alsologtostderr                                                                                  │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ ssh            │ functional-582645 ssh pgrep buildkitd                                                                                                       │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │                     │
	│ image          │ functional-582645 image build -t localhost/my-image:functional-582645 testdata/build --alsologtostderr                                      │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ image          │ functional-582645 image ls                                                                                                                  │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ image          │ functional-582645 image ls --format json --alsologtostderr                                                                                  │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ image          │ functional-582645 image ls --format table --alsologtostderr                                                                                 │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ update-context │ functional-582645 update-context --alsologtostderr -v=2                                                                                     │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ update-context │ functional-582645 update-context --alsologtostderr -v=2                                                                                     │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ update-context │ functional-582645 update-context --alsologtostderr -v=2                                                                                     │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:31 UTC │ 12 Dec 25 00:31 UTC │
	│ service        │ functional-582645 service list                                                                                                              │ functional-582645 │ jenkins │ v1.37.0 │ 12 Dec 25 00:40 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:31:41
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:31:41.183948  207172 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:31:41.184213  207172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:31:41.184223  207172 out.go:374] Setting ErrFile to fd 2...
	I1212 00:31:41.184227  207172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:31:41.184477  207172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 00:31:41.185012  207172 out.go:368] Setting JSON to false
	I1212 00:31:41.185873  207172 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":22445,"bootTime":1765477056,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:31:41.185926  207172 start.go:143] virtualization: kvm guest
	I1212 00:31:41.187624  207172 out.go:179] * [functional-582645] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:31:41.188924  207172 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:31:41.188934  207172 notify.go:221] Checking for updates...
	I1212 00:31:41.190990  207172 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:31:41.192269  207172 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 00:31:41.193613  207172 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1212 00:31:41.194714  207172 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:31:41.195764  207172 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:31:41.197191  207172 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:31:41.197682  207172 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:31:41.230637  207172 out.go:179] * Using the kvm2 driver based on existing profile
	I1212 00:31:41.231816  207172 start.go:309] selected driver: kvm2
	I1212 00:31:41.231826  207172 start.go:927] validating driver "kvm2" against &{Name:functional-582645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-582645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:31:41.231932  207172 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:31:41.232862  207172 cni.go:84] Creating CNI manager for ""
	I1212 00:31:41.232936  207172 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:31:41.232987  207172 start.go:353] cluster config:
	{Name:functional-582645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-582645 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:31:41.234403  207172 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.281150117Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d78eba08-3e05-470e-9980-a95f23b85506 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.284153330Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ecbe53cf-184e-4b2c-bffd-99e9fbb7d602 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.284860126Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765500030284832205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240715,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecbe53cf-184e-4b2c-bffd-99e9fbb7d602 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.285809919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5171d78b-c163-4c71-bbf3-539225c40fc2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.285886461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5171d78b-c163-4c71-bbf3-539225c40fc2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.286254220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c52d4014a99c64545df9eb4c13aeda4d26d5e3f9827aa6a8082583df179e4dd,PodSandboxId:16024e46e0bc204e6d1a1e7a880614e93ffa50c7ff990de0b5361ad0b5302f4a,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765499675769046222,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-h5j57,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a4b87bdb-0d23-42fa-a5dd-74911a6e1c31,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c113b3c52f3d8eb17ca20065af6aedfc226e876426fe24a4ea33e33a3ea8bdb0,PodSandboxId:74b3c61d42265646b55cb25ddcda474c43071f4c4dc56b001c7304f622b77b21,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765499499865940720,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2fab52d3-dfee-4229-9795-73be192e28a5,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f29659031313af80a49378109a4b2fdf07e8f1619330e3211e758790d90313,PodSandboxId:adc4a18f7a854f275bac696c262023c499ce44aa66142e36bbd0b17a5d7b6504,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765499494499572383,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76636df6-0816-48f2-bff8-d27f5aa9f041,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd3a1916e05c3f9c08810bdb69f42f5339dd4a154ac3a8f72236927ae79c0be,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765499403286698761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.res
tartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9037b8c6e0fdde79e1caa2de7541de8ca32f46057ca9a49f198a39479d95f35,PodSandboxId:f1a1902cc1de428200573e2bc60a40fba3b7a3ec449ffae7052a70c599a7cab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765499403326051331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfe4a0733242f3d22152fd7bbcdd733a47cbdbcc2fe129181f332892dd54fb8e,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:176549
9403268966942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65abec4fb96748b7fa9911fa2a917ae3950c5c447e2c1051c20abe79b02e64ce,PodSandboxId:4e3d4571397c70e5fcb32d995e7657d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:17654994002779
65851,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff0a796b0a3adb5ccc954249762009deff44a78e07035b6d8b352d4c8acf883d,PodSandboxId:13194d0880bc441fdf43ba3fea8c3df849f29982cbd02f1a93dba67e9782c96c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765499399823706855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b06ac47bd32667bb7072ea44ee355,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8dc4e95a84aabc830309c1fa11f77d8bfef0d69619fb0399e45d1cc4b9d16d,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887
194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_CREATED,CreatedAt:1765499377136537689,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e99f55ab1b5c72e6308cadd0638e62d22646246ba0cd61bf203866724a75b04,PodSandboxId:98015c403f3513b2fa5752f068d60dbd5d9cf9fa839d1b2e454f08b0833d8e22,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765499377071811172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ab6a45ca7b55074f7acb53c011ef27b8b712a0f5cb2ce4177c9903327edf7,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt
:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765499377047129165,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184aa047ea588a73603848444d576c82fc9a2d9965a8535c1c1c69cad4c8b45,PodSandboxId:2b9fa45b0cc74c02b2ca14c19c8ea3172e838640ed17fb6ac9aeb37b323a7072,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageS
pec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765499376880003217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff93ec24b4c3f630d59692aaf1154438061234fbe88a01cad93b87bf208f829,PodSandboxId:4e3d4571397c70e5fcb32d995e76
57d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765499376819064668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:883270426c1aeb617815fdd5d549cfe44188f69f05dfbd720be8cb33559e1f10,PodSandboxId:956f7af726b79df8e4df76cda982ab921c14fbb31e89fd08812a863224b4b945,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765499338908332143,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"na
me\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc047dec8fa6bf5fc478fa49486e194bea28167e08ebd0dbf49ab377d4a36f59,PodSandboxId:f165cce104f5b01dad5eebed623e1fcf76977b0fe9ae423291bd92340770f4fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765499335278744850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc85ff6f54fd348d21d8abade09d9f8078de8f5f5a07f985010e91fb3f1a86bf,PodSandboxId:a81223598dd2975079eba8b06da9850c64dd109188013fe7ef6728fc26807994,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765499335259527879,Labels:map[string]string{io.kubernetes.container.name: etcd,io.ku
bernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5171d78b-c163-4c71-bbf3-539225c40fc2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.309042498Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ef836d2c-d800-4f06-b571-e64d584dd00b name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.309452569Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:16024e46e0bc204e6d1a1e7a880614e93ffa50c7ff990de0b5361ad0b5302f4a,Metadata:&PodSandboxMetadata{Name:mysql-7d7b65bc95-h5j57,Uid:a4b87bdb-0d23-42fa-a5dd-74911a6e1c31,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765499506339433035,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-7d7b65bc95-h5j57,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a4b87bdb-0d23-42fa-a5dd-74911a6e1c31,pod-template-hash: 7d7b65bc95,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-12T00:31:46.018655789Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:da464ef264dcc3c600797e89c616ec1007cd6a3b0e978332c504a7bfa2e0a7cf,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-b84665fb8-6hdv4,Uid:8b0ed4a1-5dbe-43e8-b613-430d3e3514df,Namespace:kubernetes-da
shboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765499502789780646,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-b84665fb8-6hdv4,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8b0ed4a1-5dbe-43e8-b613-430d3e3514df,k8s-app: kubernetes-dashboard,pod-template-hash: b84665fb8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-12T00:31:42.420780898Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:924ca0d1074babe54cb1439d68fbf9181e6e77cd906924a2fcd46c7635910d44,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-5565989548-k5xx5,Uid:16fb483d-856f-4e93-a7a6-a7af3ab45688,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765499502727667981,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-5565989548-k5xx5,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 16fb4
83d-856f-4e93-a7a6-a7af3ab45688,k8s-app: dashboard-metrics-scraper,pod-template-hash: 5565989548,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-12T00:31:42.407555194Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:74b3c61d42265646b55cb25ddcda474c43071f4c4dc56b001c7304f622b77b21,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:2fab52d3-dfee-4229-9795-73be192e28a5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765499499550242557,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2fab52d3-dfee-4229-9795-73be192e28a5,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\
":[{\"image\":\"public.ecr.aws/nginx/nginx:alpine\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-12-12T00:31:39.228727118Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:adc4a18f7a854f275bac696c262023c499ce44aa66142e36bbd0b17a5d7b6504,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:76636df6-0816-48f2-bff8-d27f5aa9f041,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1765499436928644173,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76636df6-0816-48f2-bff8-d27f5aa9f041,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-12T00:30:36.604187468Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a43d0b43cd1a3de79ba8c21b548c27773068b67
a64cfba20404f570dfce501ae,Metadata:&PodSandboxMetadata{Name:hello-node-5758569b79-4j55r,Uid:9632c02d-df40-4bcb-a4df-5859bd3fb7db,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765499429417600628,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-5758569b79-4j55r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9632c02d-df40-4bcb-a4df-5859bd3fb7db,pod-template-hash: 5758569b79,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-12T00:30:29.060448696Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d21ae4de4af857bd7ef8f156aabdc060ebde8d12426d12fa79347670cf935890,Metadata:&PodSandboxMetadata{Name:hello-node-connect-9f67c86d4-8cddf,Uid:447df605-a8b4-4c5a-8e24-744096296393,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765499429041871949,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-9f67c86d4-8cddf,io.kub
ernetes.pod.namespace: default,io.kubernetes.pod.uid: 447df605-a8b4-4c5a-8e24-744096296393,pod-template-hash: 9f67c86d4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-12T00:30:28.703903991Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:13194d0880bc441fdf43ba3fea8c3df849f29982cbd02f1a93dba67e9782c96c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-582645,Uid:381b06ac47bd32667bb7072ea44ee355,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765499399695007591,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b06ac47bd32667bb7072ea44ee355,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.189:8441,kubernetes.io/config.hash: 381b06ac47bd32667bb7072ea44ee355,kubernetes.io/config.seen: 2025-12-12T00:29:38.898609917Z
,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f1a1902cc1de428200573e2bc60a40fba3b7a3ec449ffae7052a70c599a7cab9,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-ww2mp,Uid:af19c393-547d-45e8-a67d-56ded826444b,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765499376794741244,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-12T00:28:58.562372375Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&PodSandboxMetadata{Name:kube-proxy-cpcsm,Uid:c635e6a4-fc17-4048-8b61-da08f73f6b50,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765499376452555556,Labels:map[string]string{controller-revision-hash
: 7bd5454df7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-12T00:28:58.562368004Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e0d404c6-fa24-4c4f-91b0-edce445b5ce0,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765499376359181113,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v
1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-12T00:28:58.562371067Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:98015c403f3513b2fa5752f068d60dbd5d9cf9fa839d1b2e454f08b0833d8e22,Metadata:&PodSandboxMetadata{Name:etcd-functional-582645,Uid:0a63b02a42adbcf4ac7f47ff294bd886,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765499376246986793,Labels:map[s
tring]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.189:2379,kubernetes.io/config.hash: 0a63b02a42adbcf4ac7f47ff294bd886,kubernetes.io/config.seen: 2025-12-12T00:28:54.553713466Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4e3d4571397c70e5fcb32d995e7657d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-582645,Uid:ce4c2b37ffec990d01d527309d7be5d2,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765499376180847421,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec99
0d01d527309d7be5d2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ce4c2b37ffec990d01d527309d7be5d2,kubernetes.io/config.seen: 2025-12-12T00:28:54.553719179Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2b9fa45b0cc74c02b2ca14c19c8ea3172e838640ed17fb6ac9aeb37b323a7072,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-582645,Uid:c47db1197b7f7aa415dfbf4aa2326354,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765499376176203197,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c47db1197b7f7aa415dfbf4aa2326354,kubernetes.io/config.seen: 2025-12-12T00:28:54.553719943Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:956f7af726b79df8e4df76cda982ab921c1
4fbb31e89fd08812a863224b4b945,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-ww2mp,Uid:af19c393-547d-45e8-a67d-56ded826444b,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765499327484327229,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-12T00:28:04.186531030Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a81223598dd2975079eba8b06da9850c64dd109188013fe7ef6728fc26807994,Metadata:&PodSandboxMetadata{Name:etcd-functional-582645,Uid:0a63b02a42adbcf4ac7f47ff294bd886,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765499327274043238,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.189:2379,kubernetes.io/config.hash: 0a63b02a42adbcf4ac7f47ff294bd886,kubernetes.io/config.seen: 2025-12-12T00:27:57.919571249Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f165cce104f5b01dad5eebed623e1fcf76977b0fe9ae423291bd92340770f4fe,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-582645,Uid:c47db1197b7f7aa415dfbf4aa2326354,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765499327206221981,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c47db1197b7f7aa415dfbf4aa2326354,kubernetes.io/config.seen:
2025-12-12T00:27:57.919570308Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ef836d2c-d800-4f06-b571-e64d584dd00b name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.311236827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d0d0572-63f7-4af5-a997-3044e568b58d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.311378499Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d0d0572-63f7-4af5-a997-3044e568b58d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.312366737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c52d4014a99c64545df9eb4c13aeda4d26d5e3f9827aa6a8082583df179e4dd,PodSandboxId:16024e46e0bc204e6d1a1e7a880614e93ffa50c7ff990de0b5361ad0b5302f4a,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765499675769046222,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-h5j57,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a4b87bdb-0d23-42fa-a5dd-74911a6e1c31,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c113b3c52f3d8eb17ca20065af6aedfc226e876426fe24a4ea33e33a3ea8bdb0,PodSandboxId:74b3c61d42265646b55cb25ddcda474c43071f4c4dc56b001c7304f622b77b21,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765499499865940720,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2fab52d3-dfee-4229-9795-73be192e28a5,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f29659031313af80a49378109a4b2fdf07e8f1619330e3211e758790d90313,PodSandboxId:adc4a18f7a854f275bac696c262023c499ce44aa66142e36bbd0b17a5d7b6504,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765499494499572383,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76636df6-0816-48f2-bff8-d27f5aa9f041,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd3a1916e05c3f9c08810bdb69f42f5339dd4a154ac3a8f72236927ae79c0be,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765499403286698761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.res
tartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9037b8c6e0fdde79e1caa2de7541de8ca32f46057ca9a49f198a39479d95f35,PodSandboxId:f1a1902cc1de428200573e2bc60a40fba3b7a3ec449ffae7052a70c599a7cab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765499403326051331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfe4a0733242f3d22152fd7bbcdd733a47cbdbcc2fe129181f332892dd54fb8e,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:176549
9403268966942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65abec4fb96748b7fa9911fa2a917ae3950c5c447e2c1051c20abe79b02e64ce,PodSandboxId:4e3d4571397c70e5fcb32d995e7657d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:17654994002779
65851,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff0a796b0a3adb5ccc954249762009deff44a78e07035b6d8b352d4c8acf883d,PodSandboxId:13194d0880bc441fdf43ba3fea8c3df849f29982cbd02f1a93dba67e9782c96c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765499399823706855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b06ac47bd32667bb7072ea44ee355,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8dc4e95a84aabc830309c1fa11f77d8bfef0d69619fb0399e45d1cc4b9d16d,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887
194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_CREATED,CreatedAt:1765499377136537689,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e99f55ab1b5c72e6308cadd0638e62d22646246ba0cd61bf203866724a75b04,PodSandboxId:98015c403f3513b2fa5752f068d60dbd5d9cf9fa839d1b2e454f08b0833d8e22,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765499377071811172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ab6a45ca7b55074f7acb53c011ef27b8b712a0f5cb2ce4177c9903327edf7,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt
:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765499377047129165,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184aa047ea588a73603848444d576c82fc9a2d9965a8535c1c1c69cad4c8b45,PodSandboxId:2b9fa45b0cc74c02b2ca14c19c8ea3172e838640ed17fb6ac9aeb37b323a7072,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageS
pec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765499376880003217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff93ec24b4c3f630d59692aaf1154438061234fbe88a01cad93b87bf208f829,PodSandboxId:4e3d4571397c70e5fcb32d995e76
57d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765499376819064668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:883270426c1aeb617815fdd5d549cfe44188f69f05dfbd720be8cb33559e1f10,PodSandboxId:956f7af726b79df8e4df76cda982ab921c14fbb31e89fd08812a863224b4b945,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765499338908332143,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"na
me\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc047dec8fa6bf5fc478fa49486e194bea28167e08ebd0dbf49ab377d4a36f59,PodSandboxId:f165cce104f5b01dad5eebed623e1fcf76977b0fe9ae423291bd92340770f4fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765499335278744850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc85ff6f54fd348d21d8abade09d9f8078de8f5f5a07f985010e91fb3f1a86bf,PodSandboxId:a81223598dd2975079eba8b06da9850c64dd109188013fe7ef6728fc26807994,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765499335259527879,Labels:map[string]string{io.kubernetes.container.name: etcd,io.ku
bernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d0d0572-63f7-4af5-a997-3044e568b58d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.325839581Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f6a0013c-21a9-43a5-9ac0-0a24ebf326a5 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.325916430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f6a0013c-21a9-43a5-9ac0-0a24ebf326a5 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.327387443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd022858-001e-4a82-a7d0-f1951a2dbe2d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.328432911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765500030328177551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240715,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd022858-001e-4a82-a7d0-f1951a2dbe2d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.333878776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7847b49-02e4-43ec-8058-567bb0080772 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.334383647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7847b49-02e4-43ec-8058-567bb0080772 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.334796762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c52d4014a99c64545df9eb4c13aeda4d26d5e3f9827aa6a8082583df179e4dd,PodSandboxId:16024e46e0bc204e6d1a1e7a880614e93ffa50c7ff990de0b5361ad0b5302f4a,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765499675769046222,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-h5j57,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a4b87bdb-0d23-42fa-a5dd-74911a6e1c31,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c113b3c52f3d8eb17ca20065af6aedfc226e876426fe24a4ea33e33a3ea8bdb0,PodSandboxId:74b3c61d42265646b55cb25ddcda474c43071f4c4dc56b001c7304f622b77b21,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765499499865940720,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2fab52d3-dfee-4229-9795-73be192e28a5,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f29659031313af80a49378109a4b2fdf07e8f1619330e3211e758790d90313,PodSandboxId:adc4a18f7a854f275bac696c262023c499ce44aa66142e36bbd0b17a5d7b6504,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765499494499572383,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76636df6-0816-48f2-bff8-d27f5aa9f041,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd3a1916e05c3f9c08810bdb69f42f5339dd4a154ac3a8f72236927ae79c0be,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765499403286698761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.res
tartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9037b8c6e0fdde79e1caa2de7541de8ca32f46057ca9a49f198a39479d95f35,PodSandboxId:f1a1902cc1de428200573e2bc60a40fba3b7a3ec449ffae7052a70c599a7cab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765499403326051331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfe4a0733242f3d22152fd7bbcdd733a47cbdbcc2fe129181f332892dd54fb8e,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:176549
9403268966942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65abec4fb96748b7fa9911fa2a917ae3950c5c447e2c1051c20abe79b02e64ce,PodSandboxId:4e3d4571397c70e5fcb32d995e7657d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:17654994002779
65851,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff0a796b0a3adb5ccc954249762009deff44a78e07035b6d8b352d4c8acf883d,PodSandboxId:13194d0880bc441fdf43ba3fea8c3df849f29982cbd02f1a93dba67e9782c96c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765499399823706855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b06ac47bd32667bb7072ea44ee355,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8dc4e95a84aabc830309c1fa11f77d8bfef0d69619fb0399e45d1cc4b9d16d,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887
194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_CREATED,CreatedAt:1765499377136537689,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e99f55ab1b5c72e6308cadd0638e62d22646246ba0cd61bf203866724a75b04,PodSandboxId:98015c403f3513b2fa5752f068d60dbd5d9cf9fa839d1b2e454f08b0833d8e22,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765499377071811172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ab6a45ca7b55074f7acb53c011ef27b8b712a0f5cb2ce4177c9903327edf7,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt
:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765499377047129165,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184aa047ea588a73603848444d576c82fc9a2d9965a8535c1c1c69cad4c8b45,PodSandboxId:2b9fa45b0cc74c02b2ca14c19c8ea3172e838640ed17fb6ac9aeb37b323a7072,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageS
pec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765499376880003217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff93ec24b4c3f630d59692aaf1154438061234fbe88a01cad93b87bf208f829,PodSandboxId:4e3d4571397c70e5fcb32d995e76
57d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765499376819064668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:883270426c1aeb617815fdd5d549cfe44188f69f05dfbd720be8cb33559e1f10,PodSandboxId:956f7af726b79df8e4df76cda982ab921c14fbb31e89fd08812a863224b4b945,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765499338908332143,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"na
me\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc047dec8fa6bf5fc478fa49486e194bea28167e08ebd0dbf49ab377d4a36f59,PodSandboxId:f165cce104f5b01dad5eebed623e1fcf76977b0fe9ae423291bd92340770f4fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765499335278744850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc85ff6f54fd348d21d8abade09d9f8078de8f5f5a07f985010e91fb3f1a86bf,PodSandboxId:a81223598dd2975079eba8b06da9850c64dd109188013fe7ef6728fc26807994,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765499335259527879,Labels:map[string]string{io.kubernetes.container.name: etcd,io.ku
bernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7847b49-02e4-43ec-8058-567bb0080772 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.380551706Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9756d7b3-03c7-4aed-99d4-d53a2a9895e9 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.380623269Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9756d7b3-03c7-4aed-99d4-d53a2a9895e9 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.381792302Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=291cba5c-3ba9-4ab5-be54-2e3953b855d7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.382522321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765500030382497704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240715,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=291cba5c-3ba9-4ab5-be54-2e3953b855d7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.383603471Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79fdd790-28a3-43f6-a538-ab6a400c701c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.383675352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79fdd790-28a3-43f6-a538-ab6a400c701c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:40:30 functional-582645 crio[6047]: time="2025-12-12 00:40:30.383999316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c52d4014a99c64545df9eb4c13aeda4d26d5e3f9827aa6a8082583df179e4dd,PodSandboxId:16024e46e0bc204e6d1a1e7a880614e93ffa50c7ff990de0b5361ad0b5302f4a,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765499675769046222,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-h5j57,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a4b87bdb-0d23-42fa-a5dd-74911a6e1c31,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c113b3c52f3d8eb17ca20065af6aedfc226e876426fe24a4ea33e33a3ea8bdb0,PodSandboxId:74b3c61d42265646b55cb25ddcda474c43071f4c4dc56b001c7304f622b77b21,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765499499865940720,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2fab52d3-dfee-4229-9795-73be192e28a5,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f29659031313af80a49378109a4b2fdf07e8f1619330e3211e758790d90313,PodSandboxId:adc4a18f7a854f275bac696c262023c499ce44aa66142e36bbd0b17a5d7b6504,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765499494499572383,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76636df6-0816-48f2-bff8-d27f5aa9f041,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd3a1916e05c3f9c08810bdb69f42f5339dd4a154ac3a8f72236927ae79c0be,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765499403286698761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.res
tartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9037b8c6e0fdde79e1caa2de7541de8ca32f46057ca9a49f198a39479d95f35,PodSandboxId:f1a1902cc1de428200573e2bc60a40fba3b7a3ec449ffae7052a70c599a7cab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765499403326051331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfe4a0733242f3d22152fd7bbcdd733a47cbdbcc2fe129181f332892dd54fb8e,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:176549
9403268966942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65abec4fb96748b7fa9911fa2a917ae3950c5c447e2c1051c20abe79b02e64ce,PodSandboxId:4e3d4571397c70e5fcb32d995e7657d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:17654994002779
65851,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff0a796b0a3adb5ccc954249762009deff44a78e07035b6d8b352d4c8acf883d,PodSandboxId:13194d0880bc441fdf43ba3fea8c3df849f29982cbd02f1a93dba67e9782c96c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765499399823706855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b06ac47bd32667bb7072ea44ee355,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8dc4e95a84aabc830309c1fa11f77d8bfef0d69619fb0399e45d1cc4b9d16d,PodSandboxId:a652617fe1f84263b8fea90677cf0dfef8f815e9c4f9760427d46a7d36efb19c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887
194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_CREATED,CreatedAt:1765499377136537689,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c635e6a4-fc17-4048-8b61-da08f73f6b50,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e99f55ab1b5c72e6308cadd0638e62d22646246ba0cd61bf203866724a75b04,PodSandboxId:98015c403f3513b2fa5752f068d60dbd5d9cf9fa839d1b2e454f08b0833d8e22,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765499377071811172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ab6a45ca7b55074f7acb53c011ef27b8b712a0f5cb2ce4177c9903327edf7,PodSandboxId:5d4fdc5860729eeff1291f24ed3c80adb25df93eb7a37baded751983f6d9d0ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt
:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765499377047129165,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d404c6-fa24-4c4f-91b0-edce445b5ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184aa047ea588a73603848444d576c82fc9a2d9965a8535c1c1c69cad4c8b45,PodSandboxId:2b9fa45b0cc74c02b2ca14c19c8ea3172e838640ed17fb6ac9aeb37b323a7072,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageS
pec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765499376880003217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff93ec24b4c3f630d59692aaf1154438061234fbe88a01cad93b87bf208f829,PodSandboxId:4e3d4571397c70e5fcb32d995e76
57d34ea403d0f6c6f599b54ab8562aecb445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765499376819064668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce4c2b37ffec990d01d527309d7be5d2,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:883270426c1aeb617815fdd5d549cfe44188f69f05dfbd720be8cb33559e1f10,PodSandboxId:956f7af726b79df8e4df76cda982ab921c14fbb31e89fd08812a863224b4b945,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765499338908332143,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ww2mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af19c393-547d-45e8-a67d-56ded826444b,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"na
me\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc047dec8fa6bf5fc478fa49486e194bea28167e08ebd0dbf49ab377d4a36f59,PodSandboxId:f165cce104f5b01dad5eebed623e1fcf76977b0fe9ae423291bd92340770f4fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765499335278744850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-582645,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: c47db1197b7f7aa415dfbf4aa2326354,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc85ff6f54fd348d21d8abade09d9f8078de8f5f5a07f985010e91fb3f1a86bf,PodSandboxId:a81223598dd2975079eba8b06da9850c64dd109188013fe7ef6728fc26807994,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765499335259527879,Labels:map[string]string{io.kubernetes.container.name: etcd,io.ku
bernetes.pod.name: etcd-functional-582645,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a63b02a42adbcf4ac7f47ff294bd886,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79fdd790-28a3-43f6-a538-ab6a400c701c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2c52d4014a99c       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   5 minutes ago       Running             mysql                     0                   16024e46e0bc2       mysql-7d7b65bc95-h5j57                      default
	c113b3c52f3d8       a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c                                              8 minutes ago       Running             myfrontend                0                   74b3c61d42265       sp-pod                                      default
	00f2965903131       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           8 minutes ago       Exited              mount-munger              0                   adc4a18f7a854       busybox-mount                               default
	b9037b8c6e0fd       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              10 minutes ago      Running             coredns                   3                   f1a1902cc1de4       coredns-7d764666f9-ww2mp                    kube-system
	5bd3a1916e05c       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                              10 minutes ago      Running             kube-proxy                4                   a652617fe1f84       kube-proxy-cpcsm                            kube-system
	dfe4a0733242f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              10 minutes ago      Running             storage-provisioner       4                   5d4fdc5860729       storage-provisioner                         kube-system
	65abec4fb9674       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                              10 minutes ago      Running             kube-controller-manager   4                   4e3d4571397c7       kube-controller-manager-functional-582645   kube-system
	ff0a796b0a3ad       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                              10 minutes ago      Running             kube-apiserver            0                   13194d0880bc4       kube-apiserver-functional-582645            kube-system
	5c8dc4e95a84a       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                              10 minutes ago      Created             kube-proxy                3                   a652617fe1f84       kube-proxy-cpcsm                            kube-system
	3e99f55ab1b5c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              10 minutes ago      Running             etcd                      3                   98015c403f351       etcd-functional-582645                      kube-system
	1e9ab6a45ca7b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              10 minutes ago      Exited              storage-provisioner       3                   5d4fdc5860729       storage-provisioner                         kube-system
	6184aa047ea58       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                              10 minutes ago      Running             kube-scheduler            3                   2b9fa45b0cc74       kube-scheduler-functional-582645            kube-system
	6ff93ec24b4c3       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                              10 minutes ago      Exited              kube-controller-manager   3                   4e3d4571397c7       kube-controller-manager-functional-582645   kube-system
	883270426c1ae       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              11 minutes ago      Exited              coredns                   2                   956f7af726b79       coredns-7d764666f9-ww2mp                    kube-system
	bc047dec8fa6b       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                              11 minutes ago      Exited              kube-scheduler            2                   f165cce104f5b       kube-scheduler-functional-582645            kube-system
	bc85ff6f54fd3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              11 minutes ago      Exited              etcd                      2                   a81223598dd29       etcd-functional-582645                      kube-system
	
	
	==> coredns [883270426c1aeb617815fdd5d549cfe44188f69f05dfbd720be8cb33559e1f10] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:39797 - 29675 "HINFO IN 3914855379281731377.3141388153415285341. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044280429s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b9037b8c6e0fdde79e1caa2de7541de8ca32f46057ca9a49f198a39479d95f35] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46861 - 58619 "HINFO IN 5502039683480278376.787475168878439647. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.025734264s
	
	
	==> describe nodes <==
	Name:               functional-582645
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-582645
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=functional-582645
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_27_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:27:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-582645
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:40:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:39:53 +0000   Fri, 12 Dec 2025 00:27:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:39:53 +0000   Fri, 12 Dec 2025 00:27:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:39:53 +0000   Fri, 12 Dec 2025 00:27:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:39:53 +0000   Fri, 12 Dec 2025 00:27:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    functional-582645
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd2834266c3148d1919fc4d6d8c86b75
	  System UUID:                dd283426-6c31-48d1-919f-c4d6d8c86b75
	  Boot ID:                    76932dbb-dfdc-4ecf-a151-f63db4239adb
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-4j55r                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-8cddf            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-7d7b65bc95-h5j57                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    8m45s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m51s
	  kube-system                 coredns-7d764666f9-ww2mp                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-582645                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-582645              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-582645     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cpcsm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-582645              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-k5xx5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-6hdv4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  12m   node-controller  Node functional-582645 event: Registered Node functional-582645 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node functional-582645 event: Registered Node functional-582645 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-582645 event: Registered Node functional-582645 in Controller
	
	
	==> dmesg <==
	[  +0.005610] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.209112] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.086095] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.100463] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.153016] kauditd_printk_skb: 171 callbacks suppressed
	[Dec12 00:28] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.036671] kauditd_printk_skb: 236 callbacks suppressed
	[ +22.976464] kauditd_printk_skb: 39 callbacks suppressed
	[  +0.134953] kauditd_printk_skb: 499 callbacks suppressed
	[  +5.193393] kauditd_printk_skb: 126 callbacks suppressed
	[Dec12 00:29] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.114734] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.598850] kauditd_printk_skb: 78 callbacks suppressed
	[ +20.701032] kauditd_printk_skb: 285 callbacks suppressed
	[Dec12 00:30] kauditd_printk_skb: 85 callbacks suppressed
	[ +17.679919] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.282501] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000048] kauditd_printk_skb: 74 callbacks suppressed
	[Dec12 00:31] kauditd_printk_skb: 63 callbacks suppressed
	[  +0.019327] kauditd_printk_skb: 38 callbacks suppressed
	[  +2.998659] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.519868] crun[10756]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.004608] kauditd_printk_skb: 122 callbacks suppressed
	
	
	==> etcd [3e99f55ab1b5c72e6308cadd0638e62d22646246ba0cd61bf203866724a75b04] <==
	{"level":"warn","ts":"2025-12-12T00:34:27.517292Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-12T00:34:27.102460Z","time spent":"409.033835ms","remote":"127.0.0.1:47774","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":681,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-kqey2atblag7qtzvxzzv5w5m3e\" mod_revision:1036 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-kqey2atblag7qtzvxzzv5w5m3e\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-kqey2atblag7qtzvxzzv5w5m3e\" > >"}
	{"level":"info","ts":"2025-12-12T00:34:31.014977Z","caller":"traceutil/trace.go:172","msg":"trace[251651673] linearizableReadLoop","detail":"{readStateIndex:1182; appliedIndex:1182; }","duration":"226.690247ms","start":"2025-12-12T00:34:30.788269Z","end":"2025-12-12T00:34:31.014959Z","steps":["trace[251651673] 'read index received'  (duration: 226.682456ms)","trace[251651673] 'applied index is now lower than readState.Index'  (duration: 7µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T00:34:31.015190Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"226.905572ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:34:31.015233Z","caller":"traceutil/trace.go:172","msg":"trace[1706419621] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1053; }","duration":"226.960801ms","start":"2025-12-12T00:34:30.788266Z","end":"2025-12-12T00:34:31.015226Z","steps":["trace[1706419621] 'agreement among raft nodes before linearized reading'  (duration: 226.879341ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:34:31.016889Z","caller":"traceutil/trace.go:172","msg":"trace[1402947908] transaction","detail":"{read_only:false; response_revision:1054; number_of_response:1; }","duration":"274.619419ms","start":"2025-12-12T00:34:30.742259Z","end":"2025-12-12T00:34:31.016878Z","steps":["trace[1402947908] 'process raft request'  (duration: 272.72296ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:34:33.442070Z","caller":"traceutil/trace.go:172","msg":"trace[336734460] linearizableReadLoop","detail":"{readStateIndex:1185; appliedIndex:1185; }","duration":"295.44417ms","start":"2025-12-12T00:34:33.146609Z","end":"2025-12-12T00:34:33.442053Z","steps":["trace[336734460] 'read index received'  (duration: 295.437304ms)","trace[336734460] 'applied index is now lower than readState.Index'  (duration: 6.252µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T00:34:33.442247Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"295.624482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:34:33.442267Z","caller":"traceutil/trace.go:172","msg":"trace[122616064] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1056; }","duration":"295.657493ms","start":"2025-12-12T00:34:33.146604Z","end":"2025-12-12T00:34:33.442261Z","steps":["trace[122616064] 'agreement among raft nodes before linearized reading'  (duration: 295.599787ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:34:33.443283Z","caller":"traceutil/trace.go:172","msg":"trace[1498258740] transaction","detail":"{read_only:false; response_revision:1057; number_of_response:1; }","duration":"410.919154ms","start":"2025-12-12T00:34:33.032350Z","end":"2025-12-12T00:34:33.443269Z","steps":["trace[1498258740] 'process raft request'  (duration: 409.769805ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T00:34:33.443374Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-12T00:34:33.032332Z","time spent":"410.99919ms","remote":"127.0.0.1:47592","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1054 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-12T00:34:37.715363Z","caller":"traceutil/trace.go:172","msg":"trace[1261244251] linearizableReadLoop","detail":"{readStateIndex:1198; appliedIndex:1198; }","duration":"162.014168ms","start":"2025-12-12T00:34:37.553333Z","end":"2025-12-12T00:34:37.715347Z","steps":["trace[1261244251] 'read index received'  (duration: 162.009418ms)","trace[1261244251] 'applied index is now lower than readState.Index'  (duration: 3.945µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T00:34:37.715528Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.181315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2025-12-12T00:34:37.715547Z","caller":"traceutil/trace.go:172","msg":"trace[1421288133] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1067; }","duration":"162.21549ms","start":"2025-12-12T00:34:37.553327Z","end":"2025-12-12T00:34:37.715543Z","steps":["trace[1421288133] 'agreement among raft nodes before linearized reading'  (duration: 162.091566ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:34:37.716074Z","caller":"traceutil/trace.go:172","msg":"trace[1719813250] transaction","detail":"{read_only:false; response_revision:1068; number_of_response:1; }","duration":"180.931264ms","start":"2025-12-12T00:34:37.535134Z","end":"2025-12-12T00:34:37.716066Z","steps":["trace[1719813250] 'process raft request'  (duration: 180.854383ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T00:34:42.201828Z","caller":"traceutil/trace.go:172","msg":"trace[836659364] linearizableReadLoop","detail":"{readStateIndex:1203; appliedIndex:1203; }","duration":"412.739667ms","start":"2025-12-12T00:34:41.789073Z","end":"2025-12-12T00:34:42.201813Z","steps":["trace[836659364] 'read index received'  (duration: 412.734866ms)","trace[836659364] 'applied index is now lower than readState.Index'  (duration: 3.992µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T00:34:42.202464Z","caller":"traceutil/trace.go:172","msg":"trace[623517163] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"435.183733ms","start":"2025-12-12T00:34:41.767269Z","end":"2025-12-12T00:34:42.202452Z","steps":["trace[623517163] 'process raft request'  (duration: 435.051202ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T00:34:42.202999Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-12T00:34:41.767249Z","time spent":"435.705992ms","remote":"127.0.0.1:47592","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1071 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-12T00:34:42.203622Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"224.655917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:34:42.203981Z","caller":"traceutil/trace.go:172","msg":"trace[766840200] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1072; }","duration":"225.017569ms","start":"2025-12-12T00:34:41.978955Z","end":"2025-12-12T00:34:42.203972Z","steps":["trace[766840200] 'agreement among raft nodes before linearized reading'  (duration: 224.638964ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T00:34:42.202655Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"413.686873ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T00:34:42.205629Z","caller":"traceutil/trace.go:172","msg":"trace[1117999605] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1072; }","duration":"416.666266ms","start":"2025-12-12T00:34:41.788954Z","end":"2025-12-12T00:34:42.205620Z","steps":["trace[1117999605] 'agreement among raft nodes before linearized reading'  (duration: 413.672343ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T00:34:42.205781Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-12T00:34:41.788931Z","time spent":"416.720035ms","remote":"127.0.0.1:47618","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-12-12T00:40:00.318574Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1088}
	{"level":"info","ts":"2025-12-12T00:40:00.345367Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1088,"took":"26.337928ms","hash":3230912212,"current-db-size-bytes":3514368,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1544192,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-12-12T00:40:00.345436Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3230912212,"revision":1088,"compact-revision":-1}
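The etcd entries above repeatedly report traced durations well over the 100ms threshold etcd uses for its "apply request took too long" warnings. A minimal sketch for triaging such a log dump (the function name `slow_traces` and the line-by-line JSON format assumption are mine, not part of this report):

```python
import json

# Threshold etcd applies before emitting "apply request took too long".
SLOW_MS = 100.0

def slow_traces(lines):
    """Yield (duration_ms, msg) for etcd JSON log lines whose top-level
    "duration" or "took" field exceeds SLOW_MS. Non-JSON lines and lines
    without a duration are skipped."""
    for line in lines:
        line = line.strip()
        if not line.startswith("{"):
            continue
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue
        dur = rec.get("duration") or rec.get("took")
        if not isinstance(dur, str):
            continue
        # Durations appear as Go duration strings, e.g. "295.657493ms".
        if dur.endswith("ms"):
            ms = float(dur[:-2])
        elif dur.endswith("µs"):
            ms = float(dur[:-2]) / 1000.0
        elif dur.endswith("s"):
            ms = float(dur[:-1]) * 1000.0
        else:
            continue
        if ms > SLOW_MS:
            yield ms, rec.get("msg", "")
```

Running it over the block above would flag the 162ms–435ms range/transaction traces while skipping the sub-threshold compaction entries.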
	
	
	==> etcd [bc85ff6f54fd348d21d8abade09d9f8078de8f5f5a07f985010e91fb3f1a86bf] <==
	{"level":"warn","ts":"2025-12-12T00:28:57.450613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:57.462179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:57.479654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:57.486013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:57.504265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:57.519819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:28:57.568649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43106","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T00:29:21.122004Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-12T00:29:21.122053Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-582645","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	{"level":"error","ts":"2025-12-12T00:29:21.122416Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T00:29:21.262178Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T00:29:21.263857Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T00:29:21.263911Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6fb28b9aae66857a","current-leader-member-id":"6fb28b9aae66857a"}
	{"level":"warn","ts":"2025-12-12T00:29:21.263916Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-12T00:29:21.263990Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-12T00:29:21.264003Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T00:29:21.264013Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T00:29:21.264017Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-12T00:29:21.264065Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T00:29:21.264122Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T00:29:21.264149Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.189:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T00:29:21.268166Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"error","ts":"2025-12-12T00:29:21.268250Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.189:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T00:29:21.268275Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2025-12-12T00:29:21.268281Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-582645","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	
	
	==> kernel <==
	 00:40:30 up 13 min,  0 users,  load average: 0.16, 0.34, 0.28
	Linux functional-582645 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ff0a796b0a3adb5ccc954249762009deff44a78e07035b6d8b352d4c8acf883d] <==
	I1212 00:30:02.836366       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1212 00:30:03.046514       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 00:30:03.798461       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 00:30:03.850314       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 00:30:03.882811       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:30:03.894127       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:30:05.530734       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:30:05.630510       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 00:30:23.204290       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.21.170"}
	I1212 00:30:28.592483       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 00:30:28.766925       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.200.130"}
	I1212 00:30:29.143617       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.159.139"}
	E1212 00:31:38.387048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.189:8441->192.168.39.1:58130: use of closed network connection
	I1212 00:31:42.202372       1 controller.go:667] quota admission added evaluator for: namespaces
	I1212 00:31:42.536826       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.250.89"}
	I1212 00:31:42.561334       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.89.97"}
	E1212 00:31:45.358449       1 conn.go:339] Error on socket receive: read tcp 192.168.39.189:8441->192.168.39.1:58804: use of closed network connection
	I1212 00:31:45.933173       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.116.144"}
	E1212 00:34:42.393490       1 conn.go:339] Error on socket receive: read tcp 192.168.39.189:8441->192.168.39.1:44168: use of closed network connection
	E1212 00:34:43.168959       1 conn.go:339] Error on socket receive: read tcp 192.168.39.189:8441->192.168.39.1:44196: use of closed network connection
	E1212 00:34:44.265272       1 conn.go:339] Error on socket receive: read tcp 192.168.39.189:8441->192.168.39.1:44220: use of closed network connection
	E1212 00:34:47.165209       1 conn.go:339] Error on socket receive: read tcp 192.168.39.189:8441->192.168.39.1:44240: use of closed network connection
	E1212 00:34:49.842320       1 conn.go:339] Error on socket receive: read tcp 192.168.39.189:8441->192.168.39.1:54066: use of closed network connection
	E1212 00:34:54.257716       1 conn.go:339] Error on socket receive: read tcp 192.168.39.189:8441->192.168.39.1:54082: use of closed network connection
	I1212 00:40:02.073289       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [65abec4fb96748b7fa9911fa2a917ae3950c5c447e2c1051c20abe79b02e64ce] <==
	I1212 00:30:05.282707       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.280338       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282718       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282723       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282728       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282732       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282738       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282743       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282713       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.280323       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.282700       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.296675       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:30:05.324181       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.383401       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:05.383434       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1212 00:30:05.383440       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1212 00:30:05.399283       1 shared_informer.go:377] "Caches are synced"
	E1212 00:31:42.327513       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.337995       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.339714       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.347532       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.351546       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.363974       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.364000       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1212 00:31:42.375131       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [6ff93ec24b4c3f630d59692aaf1154438061234fbe88a01cad93b87bf208f829] <==
	I1212 00:29:38.027721       1 serving.go:386] Generated self-signed cert in-memory
	I1212 00:29:38.063805       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1212 00:29:38.068179       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:29:38.071923       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1212 00:29:38.072251       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 00:29:38.072318       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1212 00:29:38.072334       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 00:29:59.377257       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.189:8441/healthz\": dial tcp 192.168.39.189:8441: connect: connection refused"
	
	
	==> kube-proxy [5bd3a1916e05c3f9c08810bdb69f42f5339dd4a154ac3a8f72236927ae79c0be] <==
	I1212 00:30:03.781889       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:30:03.886002       1 shared_informer.go:377] "Caches are synced"
	I1212 00:30:03.886234       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.189"]
	E1212 00:30:03.886447       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:30:03.952877       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1212 00:30:03.953174       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 00:30:03.953212       1 server_linux.go:136] "Using iptables Proxier"
	I1212 00:30:03.977980       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:30:03.978444       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1212 00:30:03.978472       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:30:03.999781       1 config.go:200] "Starting service config controller"
	I1212 00:30:04.000152       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:30:04.000574       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:30:04.001887       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:30:04.002014       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:30:04.002020       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:30:04.073340       1 config.go:309] "Starting node config controller"
	I1212 00:30:04.073393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:30:04.073402       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:30:04.104533       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:30:04.104769       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 00:30:04.104800       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [5c8dc4e95a84aabc830309c1fa11f77d8bfef0d69619fb0399e45d1cc4b9d16d] <==
	
	
	==> kube-scheduler [6184aa047ea588a73603848444d576c82fc9a2d9965a8535c1c1c69cad4c8b45] <==
	E1212 00:30:02.008282       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:30:02.009419       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1212 00:30:02.011270       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1212 00:30:02.011362       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:30:02.011418       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1212 00:30:02.061223       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1212 00:30:02.061355       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1212 00:30:02.065392       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1212 00:30:02.066247       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1212 00:30:02.066613       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1212 00:30:02.066711       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1212 00:30:02.066787       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1212 00:30:02.066859       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1212 00:30:02.066943       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1212 00:30:02.067000       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1212 00:30:02.067070       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1212 00:30:02.067846       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1212 00:30:02.068457       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1212 00:30:02.068580       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1212 00:30:02.068649       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1212 00:30:02.068718       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1212 00:30:02.068847       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1212 00:30:02.068934       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1212 00:30:02.068472       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	I1212 00:30:03.990533       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [bc047dec8fa6bf5fc478fa49486e194bea28167e08ebd0dbf49ab377d4a36f59] <==
	I1212 00:28:58.283931       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:28:58.284419       1 shared_informer.go:370] "Waiting for caches to sync"
	I1212 00:28:58.284484       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 00:28:58.321421       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326389       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326471       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326516       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1212 00:28:58.326543       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326572       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:28:58.326603       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326632       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:28:58.326660       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1212 00:28:58.326684       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326708       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:28:58.326738       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1212 00:28:58.326798       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1212 00:28:58.326827       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1212 00:28:58.326857       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:28:58.326877       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1212 00:28:58.326942       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	I1212 00:28:58.384698       1 shared_informer.go:377] "Caches are synced"
	I1212 00:29:21.165005       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1212 00:29:21.165051       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1212 00:29:21.165065       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1212 00:29:21.168740       1 server.go:265] "[graceful-termination] secure server is exiting"
	
	
	==> kubelet <==
	Dec 12 00:39:50 functional-582645 kubelet[7000]: E1212 00:39:50.902351    7000 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 12 00:39:50 functional-582645 kubelet[7000]: E1212 00:39:50.902422    7000 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 12 00:39:50 functional-582645 kubelet[7000]: E1212 00:39:50.902755    7000 kuberuntime_manager.go:1664] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-b84665fb8-6hdv4_kubernetes-dashboard(8b0ed4a1-5dbe-43e8-b613-430d3e3514df): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 12 00:39:50 functional-582645 kubelet[7000]: E1212 00:39:50.902825    7000 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6hdv4" podUID="8b0ed4a1-5dbe-43e8-b613-430d3e3514df"
	Dec 12 00:39:54 functional-582645 kubelet[7000]: E1212 00:39:54.954938    7000 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-4j55r" podUID="9632c02d-df40-4bcb-a4df-5859bd3fb7db"
	Dec 12 00:39:59 functional-582645 kubelet[7000]: E1212 00:39:59.342376    7000 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765499999341856382  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240715}  inodes_used:{value:104}}"
	Dec 12 00:39:59 functional-582645 kubelet[7000]: E1212 00:39:59.342419    7000 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765499999341856382  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240715}  inodes_used:{value:104}}"
	Dec 12 00:40:04 functional-582645 kubelet[7000]: E1212 00:40:04.955266    7000 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6hdv4" containerName="kubernetes-dashboard"
	Dec 12 00:40:04 functional-582645 kubelet[7000]: E1212 00:40:04.960921    7000 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6hdv4" podUID="8b0ed4a1-5dbe-43e8-b613-430d3e3514df"
	Dec 12 00:40:09 functional-582645 kubelet[7000]: E1212 00:40:09.344937    7000 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765500009344440017  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240715}  inodes_used:{value:104}}"
	Dec 12 00:40:09 functional-582645 kubelet[7000]: E1212 00:40:09.344992    7000 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765500009344440017  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240715}  inodes_used:{value:104}}"
	Dec 12 00:40:16 functional-582645 kubelet[7000]: E1212 00:40:16.959314    7000 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-582645" containerName="kube-apiserver"
	Dec 12 00:40:16 functional-582645 kubelet[7000]: E1212 00:40:16.959738    7000 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6hdv4" containerName="kubernetes-dashboard"
	Dec 12 00:40:16 functional-582645 kubelet[7000]: E1212 00:40:16.965814    7000 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6hdv4" podUID="8b0ed4a1-5dbe-43e8-b613-430d3e3514df"
	Dec 12 00:40:19 functional-582645 kubelet[7000]: E1212 00:40:19.347736    7000 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765500019347307848  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240715}  inodes_used:{value:104}}"
	Dec 12 00:40:19 functional-582645 kubelet[7000]: E1212 00:40:19.347779    7000 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765500019347307848  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240715}  inodes_used:{value:104}}"
	Dec 12 00:40:21 functional-582645 kubelet[7000]: E1212 00:40:21.001477    7000 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 12 00:40:21 functional-582645 kubelet[7000]: E1212 00:40:21.001566    7000 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 12 00:40:21 functional-582645 kubelet[7000]: E1212 00:40:21.001878    7000 kuberuntime_manager.go:1664] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-9f67c86d4-8cddf_default(447df605-a8b4-4c5a-8e24-744096296393): ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 12 00:40:21 functional-582645 kubelet[7000]: E1212 00:40:21.001917    7000 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-8cddf" podUID="447df605-a8b4-4c5a-8e24-744096296393"
	Dec 12 00:40:27 functional-582645 kubelet[7000]: E1212 00:40:27.954934    7000 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6hdv4" containerName="kubernetes-dashboard"
	Dec 12 00:40:27 functional-582645 kubelet[7000]: E1212 00:40:27.955178    7000 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-582645" containerName="etcd"
	Dec 12 00:40:27 functional-582645 kubelet[7000]: E1212 00:40:27.957482    7000 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6hdv4" podUID="8b0ed4a1-5dbe-43e8-b613-430d3e3514df"
	Dec 12 00:40:29 functional-582645 kubelet[7000]: E1212 00:40:29.350986    7000 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765500029350712576  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240715}  inodes_used:{value:104}}"
	Dec 12 00:40:29 functional-582645 kubelet[7000]: E1212 00:40:29.351007    7000 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765500029350712576  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240715}  inodes_used:{value:104}}"
	
	
	==> storage-provisioner [1e9ab6a45ca7b55074f7acb53c011ef27b8b712a0f5cb2ce4177c9903327edf7] <==
	I1212 00:29:37.563740       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 00:29:37.569698       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [dfe4a0733242f3d22152fd7bbcdd733a47cbdbcc2fe129181f332892dd54fb8e] <==
	W1212 00:40:06.340815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:08.344985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:08.350802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:10.354903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:10.365551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:12.370657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:12.377393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:14.381828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:14.391534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:16.395520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:16.404063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:18.408948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:18.414491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:20.419419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:20.432046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:22.435698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:22.444670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:24.449031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:24.454511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:26.457816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:26.463998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:28.467932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:28.473663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:30.478985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:40:30.487406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-582645 -n functional-582645
helpers_test.go:270: (dbg) Run:  kubectl --context functional-582645 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-4j55r hello-node-connect-9f67c86d4-8cddf dashboard-metrics-scraper-5565989548-k5xx5 kubernetes-dashboard-b84665fb8-6hdv4
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-582645 describe pod busybox-mount hello-node-5758569b79-4j55r hello-node-connect-9f67c86d4-8cddf dashboard-metrics-scraper-5565989548-k5xx5 kubernetes-dashboard-b84665fb8-6hdv4
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-582645 describe pod busybox-mount hello-node-5758569b79-4j55r hello-node-connect-9f67c86d4-8cddf dashboard-metrics-scraper-5565989548-k5xx5 kubernetes-dashboard-b84665fb8-6hdv4: exit status 1 (109.427754ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-582645/192.168.39.189
	Start Time:       Fri, 12 Dec 2025 00:30:36 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://00f29659031313af80a49378109a4b2fdf07e8f1619330e3211e758790d90313
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 12 Dec 2025 00:31:34 +0000
	      Finished:     Fri, 12 Dec 2025 00:31:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ft9ml (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-ft9ml:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m55s  default-scheduler  Successfully assigned default/busybox-mount to functional-582645
	  Normal  Pulling    9m54s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8m57s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.148s (57.323s including waiting). Image size: 4631262 bytes.
	  Normal  Created    8m57s  kubelet            Container created
	  Normal  Started    8m57s  kubelet            Container started
	
	
	Name:             hello-node-5758569b79-4j55r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-582645/192.168.39.189
	Start Time:       Fri, 12 Dec 2025 00:30:29 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cflqj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cflqj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-5758569b79-4j55r to functional-582645
	  Warning  Failed     4m25s                kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     115s (x3 over 9m2s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     115s (x4 over 9m2s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    37s (x11 over 9m1s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     37s (x11 over 9m1s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    23s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-8cddf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-582645/192.168.39.189
	Start Time:       Fri, 12 Dec 2025 00:30:28 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6mhqr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6mhqr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-8cddf to functional-582645
	  Warning  Failed     9m32s                  kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m25s (x2 over 7m57s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    67s (x11 over 9m31s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     67s (x11 over 9m31s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    52s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     10s (x5 over 9m32s)    kubelet            Error: ErrImagePull
	  Warning  Failed     10s (x2 over 5m26s)    kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-k5xx5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-6hdv4" not found

** /stderr **
helpers_test.go:288: kubectl --context functional-582645 describe pod busybox-mount hello-node-5758569b79-4j55r hello-node-connect-9f67c86d4-8cddf dashboard-metrics-scraper-5565989548-k5xx5 kubernetes-dashboard-b84665fb8-6hdv4: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (603.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-582645 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-582645 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-4j55r" [9632c02d-df40-4bcb-a4df-5859bd3fb7db] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-582645 -n functional-582645
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-12 00:40:29.463891152 +0000 UTC m=+2694.173524129
functional_test.go:1460: (dbg) Run:  kubectl --context functional-582645 describe po hello-node-5758569b79-4j55r -n default
functional_test.go:1460: (dbg) kubectl --context functional-582645 describe po hello-node-5758569b79-4j55r -n default:
Name:             hello-node-5758569b79-4j55r
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-582645/192.168.39.189
Start Time:       Fri, 12 Dec 2025 00:30:29 +0000
Labels:           app=hello-node
                  pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cflqj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-cflqj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-5758569b79-4j55r to functional-582645
  Warning  Failed     4m23s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     113s (x3 over 9m)     kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     113s (x4 over 9m)     kubelet            Error: ErrImagePull
  Normal   BackOff    35s (x11 over 8m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     35s (x11 over 8m59s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    21s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-582645 logs hello-node-5758569b79-4j55r -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-582645 logs hello-node-5758569b79-4j55r -n default: exit status 1 (78.179324ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-4j55r" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-582645 logs hello-node-5758569b79-4j55r -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.72s)
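The failure above reduces to the pod's single container sitting in `ImagePullBackOff` for the full 10m wait. As an illustrative sketch (not part of the minikube test suite), this is how one might detect that condition programmatically from a pod's `.status`, the same data `kubectl describe` renders above; the helper name and sample dict are hypothetical:

```python
# Hypothetical helper: given a pod's status (as returned by
# `kubectl get pod -o json` under .status), report which containers
# are stuck on a failed image pull.

def stuck_image_pulls(pod_status: dict) -> list:
    """Return names of containers waiting on ImagePullBackOff/ErrImagePull."""
    stuck = []
    for cs in pod_status.get("containerStatuses", []):
        waiting = cs.get("state", {}).get("waiting")
        if waiting and waiting.get("reason") in ("ImagePullBackOff", "ErrImagePull"):
            stuck.append(cs["name"])
    return stuck

# Shape mirrors the failing hello-node pod above:
status = {
    "phase": "Pending",
    "containerStatuses": [
        {"name": "echo-server",
         "state": {"waiting": {"reason": "ImagePullBackOff"}}},
    ],
}
print(stuck_image_pulls(status))  # ['echo-server']
```

The `toomanyrequests` messages indicate the Docker Hub unauthenticated pull rate limit, not a cluster fault, which is why every image-dependent subtest in this run fails the same way.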

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-582645 service --namespace=default --https --url hello-node: exit status 115 (282.794283ms)

-- stdout --
	https://192.168.39.189:32079
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-582645 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.28s)
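The `SVC_UNREACHABLE` exits here and in the Format/URL subtests below are downstream of the same pull failure: the NodePort URL exists, but the service selects no running pod. A minimal sketch of the equivalent readiness check against a Kubernetes `Endpoints` object (function name and sample dicts are illustrative, not minikube's code):

```python
# Illustrative check: a Service is "reachable" only if its Endpoints
# object lists at least one ready address in some subset.

def service_reachable(endpoints: dict) -> bool:
    """True if any Endpoints subset has at least one ready address."""
    return any(subset.get("addresses")
               for subset in endpoints.get("subsets") or [])

# hello-node: URL is allocated, but no pod ever became Ready.
assert not service_reachable({"subsets": []})
# A healthy service would look like:
assert service_reachable({"subsets": [{"addresses": [{"ip": "10.244.0.9"}]}]})
```

This matches the observed behavior: the URL `http://192.168.39.189:32079` is printed to stdout, then the command exits 115 because the backing pod check fails.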

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-582645 service hello-node --url --format={{.IP}}: exit status 115 (268.077055ms)

-- stdout --
	192.168.39.189
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-582645 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-582645 service hello-node --url: exit status 115 (276.00757ms)

-- stdout --
	http://192.168.39.189:32079
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-582645 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.189:32079
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.28s)

TestPreload (160.61s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-209368 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1212 01:21:52.761679  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:22:09.688073  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:22:54.483952  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-209368 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m40.568697742s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-209368 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-209368 image pull gcr.io/k8s-minikube/busybox: (2.709904496s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-209368
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-209368: (8.251427986s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-209368 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-209368 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (46.271396952s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-209368 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

-- /stdout --
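The assertion that failed here (preload_test.go:73) is a substring scan of `minikube image list` output for the busybox image that was pulled before the stop. An illustrative recreation of that check (helper name and sample list are hypothetical), showing why the run above fails:

```python
# Illustrative version of the preload test's assertion: the image pulled
# before `minikube stop` should survive the preloaded restart and appear
# in `minikube image list` output.

def has_image(image_list: str, name: str) -> bool:
    """True if any listed image reference contains the given name."""
    return any(name in line for line in image_list.splitlines())

# Abridged from the stdout above: busybox is absent after the restart.
listed = """registry.k8s.io/pause:3.10.1
registry.k8s.io/etcd:3.6.5-0
gcr.io/k8s-minikube/storage-provisioner:v5"""
print(has_image(listed, "gcr.io/k8s-minikube/busybox"))  # False
```

Only the preloaded default images survive the restart, so the pulled busybox image is missing and the test fails.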
panic.go:615: *** TestPreload FAILED at 2025-12-12 01:24:18.741412963 +0000 UTC m=+5323.451045939
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-209368 -n test-preload-209368
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-209368 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p test-preload-209368 logs -n 25: (1.091056764s)
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-074388 ssh -n multinode-074388-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:09 UTC │ 12 Dec 25 01:09 UTC │
	│ ssh     │ multinode-074388 ssh -n multinode-074388 sudo cat /home/docker/cp-test_multinode-074388-m03_multinode-074388.txt                                          │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:09 UTC │ 12 Dec 25 01:09 UTC │
	│ cp      │ multinode-074388 cp multinode-074388-m03:/home/docker/cp-test.txt multinode-074388-m02:/home/docker/cp-test_multinode-074388-m03_multinode-074388-m02.txt │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:09 UTC │ 12 Dec 25 01:09 UTC │
	│ ssh     │ multinode-074388 ssh -n multinode-074388-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:09 UTC │ 12 Dec 25 01:09 UTC │
	│ ssh     │ multinode-074388 ssh -n multinode-074388-m02 sudo cat /home/docker/cp-test_multinode-074388-m03_multinode-074388-m02.txt                                  │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:09 UTC │ 12 Dec 25 01:09 UTC │
	│ node    │ multinode-074388 node stop m03                                                                                                                            │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:09 UTC │ 12 Dec 25 01:09 UTC │
	│ node    │ multinode-074388 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:09 UTC │ 12 Dec 25 01:10 UTC │
	│ node    │ list -p multinode-074388                                                                                                                                  │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:10 UTC │                     │
	│ stop    │ -p multinode-074388                                                                                                                                       │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:10 UTC │ 12 Dec 25 01:13 UTC │
	│ start   │ -p multinode-074388 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:13 UTC │ 12 Dec 25 01:16 UTC │
	│ node    │ list -p multinode-074388                                                                                                                                  │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:16 UTC │                     │
	│ node    │ multinode-074388 node delete m03                                                                                                                          │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:16 UTC │ 12 Dec 25 01:16 UTC │
	│ stop    │ multinode-074388 stop                                                                                                                                     │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:16 UTC │ 12 Dec 25 01:18 UTC │
	│ start   │ -p multinode-074388 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:18 UTC │ 12 Dec 25 01:20 UTC │
	│ node    │ list -p multinode-074388                                                                                                                                  │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:20 UTC │                     │
	│ start   │ -p multinode-074388-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-074388-m02 │ jenkins │ v1.37.0 │ 12 Dec 25 01:20 UTC │                     │
	│ start   │ -p multinode-074388-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-074388-m03 │ jenkins │ v1.37.0 │ 12 Dec 25 01:20 UTC │ 12 Dec 25 01:21 UTC │
	│ node    │ add -p multinode-074388                                                                                                                                   │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │                     │
	│ delete  │ -p multinode-074388-m03                                                                                                                                   │ multinode-074388-m03 │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:21 UTC │
	│ delete  │ -p multinode-074388                                                                                                                                       │ multinode-074388     │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:21 UTC │
	│ start   │ -p test-preload-209368 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-209368  │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:23 UTC │
	│ image   │ test-preload-209368 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-209368  │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ stop    │ -p test-preload-209368                                                                                                                                    │ test-preload-209368  │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p test-preload-209368 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-209368  │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ image   │ test-preload-209368 image list                                                                                                                            │ test-preload-209368  │ jenkins │ v1.37.0 │ 12 Dec 25 01:24 UTC │ 12 Dec 25 01:24 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:23:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:23:32.326376  226740 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:23:32.326658  226740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:23:32.326668  226740 out.go:374] Setting ErrFile to fd 2...
	I1212 01:23:32.326672  226740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:23:32.326896  226740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 01:23:32.327374  226740 out.go:368] Setting JSON to false
	I1212 01:23:32.328308  226740 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25556,"bootTime":1765477056,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 01:23:32.328370  226740 start.go:143] virtualization: kvm guest
	I1212 01:23:32.330398  226740 out.go:179] * [test-preload-209368] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 01:23:32.331660  226740 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:23:32.331684  226740 notify.go:221] Checking for updates...
	I1212 01:23:32.334068  226740 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:23:32.335544  226740 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 01:23:32.336809  226740 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1212 01:23:32.338123  226740 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 01:23:32.339322  226740 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:23:32.341306  226740 config.go:182] Loaded profile config "test-preload-209368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:23:32.342032  226740 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:23:32.380670  226740 out.go:179] * Using the kvm2 driver based on existing profile
	I1212 01:23:32.382086  226740 start.go:309] selected driver: kvm2
	I1212 01:23:32.382104  226740 start.go:927] validating driver "kvm2" against &{Name:test-preload-209368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-209368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:23:32.382238  226740 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:23:32.383692  226740 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:23:32.383735  226740 cni.go:84] Creating CNI manager for ""
	I1212 01:23:32.383831  226740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:23:32.383889  226740 start.go:353] cluster config:
	{Name:test-preload-209368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-209368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:23:32.384022  226740 iso.go:125] acquiring lock: {Name:mkc8bf4754eb4f0261bb252fe2c8bf1a2bf2967f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:23:32.385425  226740 out.go:179] * Starting "test-preload-209368" primary control-plane node in "test-preload-209368" cluster
	I1212 01:23:32.386575  226740 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 01:23:32.386630  226740 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 01:23:32.386640  226740 cache.go:65] Caching tarball of preloaded images
	I1212 01:23:32.386764  226740 preload.go:238] Found /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 01:23:32.386777  226740 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 01:23:32.386868  226740 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/config.json ...
	I1212 01:23:32.387115  226740 start.go:360] acquireMachinesLock for test-preload-209368: {Name:mk7557506c78bc6cb73692cb48d3039f590aa12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:23:32.387167  226740 start.go:364] duration metric: took 28.867µs to acquireMachinesLock for "test-preload-209368"
	I1212 01:23:32.387183  226740 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:23:32.387189  226740 fix.go:54] fixHost starting: 
	I1212 01:23:32.389049  226740 fix.go:112] recreateIfNeeded on test-preload-209368: state=Stopped err=<nil>
	W1212 01:23:32.389074  226740 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:23:32.390806  226740 out.go:252] * Restarting existing kvm2 VM for "test-preload-209368" ...
	I1212 01:23:32.390870  226740 main.go:143] libmachine: starting domain...
	I1212 01:23:32.390886  226740 main.go:143] libmachine: ensuring networks are active...
	I1212 01:23:32.391902  226740 main.go:143] libmachine: Ensuring network default is active
	I1212 01:23:32.392389  226740 main.go:143] libmachine: Ensuring network mk-test-preload-209368 is active
	I1212 01:23:32.392790  226740 main.go:143] libmachine: getting domain XML...
	I1212 01:23:32.393996  226740 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-209368</name>
	  <uuid>3ee160f3-cc58-44cc-a274-7490cbfd7c3b</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/test-preload-209368/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/test-preload-209368/test-preload-209368.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:d0:df:02'/>
	      <source network='mk-test-preload-209368'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:30:45:73'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1212 01:23:33.706703  226740 main.go:143] libmachine: waiting for domain to start...
	I1212 01:23:33.708338  226740 main.go:143] libmachine: domain is now running
	I1212 01:23:33.708358  226740 main.go:143] libmachine: waiting for IP...
	I1212 01:23:33.709263  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:33.709931  226740 main.go:143] libmachine: domain test-preload-209368 has current primary IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:33.709947  226740 main.go:143] libmachine: found domain IP: 192.168.39.115
	I1212 01:23:33.709954  226740 main.go:143] libmachine: reserving static IP address...
	I1212 01:23:33.710321  226740 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-209368", mac: "52:54:00:d0:df:02", ip: "192.168.39.115"} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:21:57 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:33.710346  226740 main.go:143] libmachine: skip adding static IP to network mk-test-preload-209368 - found existing host DHCP lease matching {name: "test-preload-209368", mac: "52:54:00:d0:df:02", ip: "192.168.39.115"}
	I1212 01:23:33.710358  226740 main.go:143] libmachine: reserved static IP address 192.168.39.115 for domain test-preload-209368
	I1212 01:23:33.710365  226740 main.go:143] libmachine: waiting for SSH...
	I1212 01:23:33.710371  226740 main.go:143] libmachine: Getting to WaitForSSH function...
	I1212 01:23:33.712815  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:33.713220  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:21:57 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:33.713241  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:33.713391  226740 main.go:143] libmachine: Using SSH client type: native
	I1212 01:23:33.713616  226740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I1212 01:23:33.713626  226740 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1212 01:23:36.765744  226740 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.115:22: connect: no route to host
	I1212 01:23:42.845851  226740 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.115:22: connect: no route to host
	I1212 01:23:45.848903  226740 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.115:22: connect: connection refused
	I1212 01:23:48.956586  226740 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:23:48.960453  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:48.961066  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:48.961108  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:48.961424  226740 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/config.json ...
	I1212 01:23:48.961689  226740 machine.go:94] provisionDockerMachine start ...
	I1212 01:23:48.964395  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:48.964860  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:48.964886  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:48.965078  226740 main.go:143] libmachine: Using SSH client type: native
	I1212 01:23:48.965294  226740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I1212 01:23:48.965304  226740 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:23:49.071184  226740 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:23:49.071214  226740 buildroot.go:166] provisioning hostname "test-preload-209368"
	I1212 01:23:49.074347  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.074807  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:49.074843  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.075012  226740 main.go:143] libmachine: Using SSH client type: native
	I1212 01:23:49.075216  226740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I1212 01:23:49.075227  226740 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-209368 && echo "test-preload-209368" | sudo tee /etc/hostname
	I1212 01:23:49.200505  226740 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-209368
	
	I1212 01:23:49.203550  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.203949  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:49.203973  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.204169  226740 main.go:143] libmachine: Using SSH client type: native
	I1212 01:23:49.204423  226740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I1212 01:23:49.204439  226740 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-209368' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-209368/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-209368' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:23:49.321684  226740 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:23:49.321721  226740 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22101-186349/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-186349/.minikube}
	I1212 01:23:49.321782  226740 buildroot.go:174] setting up certificates
	I1212 01:23:49.321801  226740 provision.go:84] configureAuth start
	I1212 01:23:49.325222  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.325652  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:49.325685  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.328340  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.328742  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:49.328769  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.328908  226740 provision.go:143] copyHostCerts
	I1212 01:23:49.328985  226740 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-186349/.minikube/ca.pem, removing ...
	I1212 01:23:49.329006  226740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.pem
	I1212 01:23:49.329085  226740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/ca.pem (1082 bytes)
	I1212 01:23:49.329224  226740 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-186349/.minikube/cert.pem, removing ...
	I1212 01:23:49.329237  226740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-186349/.minikube/cert.pem
	I1212 01:23:49.329283  226740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/cert.pem (1123 bytes)
	I1212 01:23:49.329365  226740 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-186349/.minikube/key.pem, removing ...
	I1212 01:23:49.329376  226740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-186349/.minikube/key.pem
	I1212 01:23:49.329417  226740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/key.pem (1675 bytes)
	I1212 01:23:49.329509  226740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem org=jenkins.test-preload-209368 san=[127.0.0.1 192.168.39.115 localhost minikube test-preload-209368]
	I1212 01:23:49.453040  226740 provision.go:177] copyRemoteCerts
	I1212 01:23:49.453133  226740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:23:49.456208  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.456652  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:49.456686  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.456871  226740 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/test-preload-209368/id_rsa Username:docker}
	I1212 01:23:49.543990  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:23:49.576909  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 01:23:49.610927  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:23:49.643969  226740 provision.go:87] duration metric: took 322.150674ms to configureAuth
	I1212 01:23:49.644005  226740 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:23:49.644202  226740 config.go:182] Loaded profile config "test-preload-209368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:23:49.647637  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.648141  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:49.648167  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.648389  226740 main.go:143] libmachine: Using SSH client type: native
	I1212 01:23:49.648659  226740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I1212 01:23:49.648675  226740 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:23:49.913579  226740 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:23:49.913611  226740 machine.go:97] duration metric: took 951.90673ms to provisionDockerMachine
	I1212 01:23:49.913627  226740 start.go:293] postStartSetup for "test-preload-209368" (driver="kvm2")
	I1212 01:23:49.913641  226740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:23:49.913698  226740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:23:49.916931  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.917498  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:49.917526  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:49.917672  226740 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/test-preload-209368/id_rsa Username:docker}
	I1212 01:23:50.003795  226740 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:23:50.010437  226740 info.go:137] Remote host: Buildroot 2025.02
	I1212 01:23:50.010494  226740 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-186349/.minikube/addons for local assets ...
	I1212 01:23:50.010570  226740 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-186349/.minikube/files for local assets ...
	I1212 01:23:50.010670  226740 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-186349/.minikube/files/etc/ssl/certs/1902722.pem -> 1902722.pem in /etc/ssl/certs
	I1212 01:23:50.010777  226740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:23:50.025395  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/files/etc/ssl/certs/1902722.pem --> /etc/ssl/certs/1902722.pem (1708 bytes)
	I1212 01:23:50.062755  226740 start.go:296] duration metric: took 149.10812ms for postStartSetup
	I1212 01:23:50.062805  226740 fix.go:56] duration metric: took 17.675616506s for fixHost
	I1212 01:23:50.066337  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:50.066887  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:50.066920  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:50.067231  226740 main.go:143] libmachine: Using SSH client type: native
	I1212 01:23:50.067521  226740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I1212 01:23:50.067536  226740 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:23:50.175976  226740 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765502630.138245491
	
	I1212 01:23:50.176003  226740 fix.go:216] guest clock: 1765502630.138245491
	I1212 01:23:50.176012  226740 fix.go:229] Guest: 2025-12-12 01:23:50.138245491 +0000 UTC Remote: 2025-12-12 01:23:50.062810031 +0000 UTC m=+17.789989542 (delta=75.43546ms)
	I1212 01:23:50.176034  226740 fix.go:200] guest clock delta is within tolerance: 75.43546ms
	I1212 01:23:50.176040  226740 start.go:83] releasing machines lock for "test-preload-209368", held for 17.788863611s
	I1212 01:23:50.179299  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:50.179705  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:50.179737  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:50.180312  226740 ssh_runner.go:195] Run: cat /version.json
	I1212 01:23:50.180404  226740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:23:50.183766  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:50.183851  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:50.184293  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:50.184313  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:50.184336  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:50.184333  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:50.184593  226740 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/test-preload-209368/id_rsa Username:docker}
	I1212 01:23:50.184601  226740 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/test-preload-209368/id_rsa Username:docker}
	I1212 01:23:50.285567  226740 ssh_runner.go:195] Run: systemctl --version
	I1212 01:23:50.292859  226740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:23:50.447721  226740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:23:50.457250  226740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:23:50.457319  226740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:23:50.485205  226740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:23:50.485232  226740 start.go:496] detecting cgroup driver to use...
	I1212 01:23:50.485320  226740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:23:50.510301  226740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:23:50.531419  226740 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:23:50.531512  226740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:23:50.553742  226740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:23:50.572662  226740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:23:50.728155  226740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:23:50.964589  226740 docker.go:234] disabling docker service ...
	I1212 01:23:50.964667  226740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:23:50.985880  226740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:23:51.003963  226740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:23:51.179100  226740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:23:51.337884  226740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:23:51.355849  226740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:23:51.381862  226740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 01:23:51.381951  226740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:51.396693  226740 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:23:51.396783  226740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:51.411155  226740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:51.425727  226740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:51.441011  226740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:23:51.456482  226740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:51.471230  226740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:51.495544  226740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:51.509915  226740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:23:51.522697  226740 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:23:51.522772  226740 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:23:51.546532  226740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:23:51.560564  226740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:23:51.710181  226740 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:23:51.837681  226740 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:23:51.837767  226740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
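The "Will wait 60s for socket path" step above is a bounded poll on `stat`. A sketch of that pattern with a hypothetical helper, using a plain temp file in place of /var/run/crio/crio.sock (bash assumed for `$SECONDS`):

```shell
# Poll until $1 exists or the timeout (seconds, default 60) expires.
wait_for_path() {
  local path=$1 deadline=$(( SECONDS + ${2:-60} ))
  until stat "$path" >/dev/null 2>&1; do
    [ "$SECONDS" -ge "$deadline" ] && return 1
    sleep 0.2
  done
}

sock="$(mktemp -d)/crio.sock"   # does not exist yet
( sleep 1; touch "$sock" ) &    # the runtime "creates" it shortly after
wait_for_path "$sock" 10 && echo "socket ready"
```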
	I1212 01:23:51.843893  226740 start.go:564] Will wait 60s for crictl version
	I1212 01:23:51.843973  226740 ssh_runner.go:195] Run: which crictl
	I1212 01:23:51.849264  226740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:23:51.889080  226740 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:23:51.889192  226740 ssh_runner.go:195] Run: crio --version
	I1212 01:23:51.922355  226740 ssh_runner.go:195] Run: crio --version
	I1212 01:23:51.955671  226740 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1212 01:23:51.960078  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:51.960541  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:23:51.960565  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:23:51.960753  226740 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 01:23:51.966498  226740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:23:51.983582  226740 kubeadm.go:884] updating cluster {Name:test-preload-209368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-209368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:23:51.983733  226740 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 01:23:51.983785  226740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:23:52.020598  226740 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1212 01:23:52.020672  226740 ssh_runner.go:195] Run: which lz4
	I1212 01:23:52.025920  226740 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:23:52.031822  226740 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:23:52.031871  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1212 01:23:53.575255  226740 crio.go:462] duration metric: took 1.54938894s to copy over tarball
	I1212 01:23:53.575329  226740 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:23:55.155035  226740 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.579679208s)
	I1212 01:23:55.155068  226740 crio.go:469] duration metric: took 1.579780631s to extract the tarball
	I1212 01:23:55.155079  226740 ssh_runner.go:146] rm: /preloaded.tar.lz4
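The preload round trip above is: ship a compressed image tarball, extract it into the runtime's root, delete the tarball. A sketch of the same flow; gzip and temp dirs stand in for lz4 and /var so it runs unprivileged (the real command is `sudo tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4`):

```shell
src=$(mktemp -d); dst=$(mktemp -d); ball="$src/preloaded.tar.gz"
mkdir -p "$src/lib/containers"
echo "layer-data" > "$src/lib/containers/layer1"

tar -czf "$ball" -C "$src" lib   # build the preload tarball
tar -xzf "$ball" -C "$dst"       # extract into the target root
rm -f "$ball"                    # mirrors the final `rm: /preloaded.tar.lz4`

cat "$dst/lib/containers/layer1"
```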
	I1212 01:23:55.193982  226740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:23:55.239029  226740 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:23:55.239055  226740 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:23:55.239063  226740 kubeadm.go:935] updating node { 192.168.39.115 8443 v1.34.2 crio true true} ...
	I1212 01:23:55.239244  226740 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-209368 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-209368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:23:55.239359  226740 ssh_runner.go:195] Run: crio config
	I1212 01:23:55.288504  226740 cni.go:84] Creating CNI manager for ""
	I1212 01:23:55.288530  226740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:23:55.288549  226740 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:23:55.288579  226740 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-209368 NodeName:test-preload-209368 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:23:55.288755  226740 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-209368"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.115"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:23:55.288845  226740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 01:23:55.302693  226740 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:23:55.302766  226740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:23:55.316087  226740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1212 01:23:55.339765  226740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:23:55.363188  226740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1212 01:23:55.387992  226740 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I1212 01:23:55.393547  226740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
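The /etc/hosts edit above is a delete-then-append upsert: strip any stale line for the name, append the current mapping, and install the result with `cp` so the file is replaced in one step. Sketch against a temp hosts file:

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.99\tcontrol-plane.minikube.internal\n' > "$hosts"

ip=192.168.39.115
name=control-plane.minikube.internal
# Drop lines ending in <TAB><name>, then append the fresh mapping.
{ grep -v $'\t'"$name"'$' "$hosts"; printf '%s\t%s\n' "$ip" "$name"; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Because the old entry is filtered out first, rerunning the edit never accumulates duplicate lines for the same hostname.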
	I1212 01:23:55.411999  226740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:23:55.562244  226740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:23:55.599260  226740 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368 for IP: 192.168.39.115
	I1212 01:23:55.599285  226740 certs.go:195] generating shared ca certs ...
	I1212 01:23:55.599304  226740 certs.go:227] acquiring lock for ca certs: {Name:mkdc58adfd2cc299a76aeec81ac0d7f7d2a38e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:55.599502  226740 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key
	I1212 01:23:55.599545  226740 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key
	I1212 01:23:55.599555  226740 certs.go:257] generating profile certs ...
	I1212 01:23:55.599648  226740 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/client.key
	I1212 01:23:55.599712  226740 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/apiserver.key.998ec6b8
	I1212 01:23:55.599754  226740 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/proxy-client.key
	I1212 01:23:55.599890  226740 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/190272.pem (1338 bytes)
	W1212 01:23:55.599931  226740 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-186349/.minikube/certs/190272_empty.pem, impossibly tiny 0 bytes
	I1212 01:23:55.599938  226740 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:23:55.599972  226740 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:23:55.600003  226740 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:23:55.600026  226740 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem (1675 bytes)
	I1212 01:23:55.600069  226740 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/files/etc/ssl/certs/1902722.pem (1708 bytes)
	I1212 01:23:55.600820  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:23:55.647036  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:23:55.685071  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:23:55.719189  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 01:23:55.754358  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 01:23:55.791813  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:23:55.831311  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:23:55.870421  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:23:55.909679  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:23:55.945434  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/certs/190272.pem --> /usr/share/ca-certificates/190272.pem (1338 bytes)
	I1212 01:23:55.980754  226740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/files/etc/ssl/certs/1902722.pem --> /usr/share/ca-certificates/1902722.pem (1708 bytes)
	I1212 01:23:56.015672  226740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:23:56.040743  226740 ssh_runner.go:195] Run: openssl version
	I1212 01:23:56.048068  226740 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1902722.pem
	I1212 01:23:56.061237  226740 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1902722.pem /etc/ssl/certs/1902722.pem
	I1212 01:23:56.074712  226740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1902722.pem
	I1212 01:23:56.081313  226740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:27 /usr/share/ca-certificates/1902722.pem
	I1212 01:23:56.081395  226740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1902722.pem
	I1212 01:23:56.090402  226740 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:23:56.103913  226740 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1902722.pem /etc/ssl/certs/3ec20f2e.0
	I1212 01:23:56.117271  226740 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:56.131066  226740 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:23:56.144611  226740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:56.150666  226740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:56.150735  226740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:56.159337  226740 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:23:56.172858  226740 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 01:23:56.186544  226740 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/190272.pem
	I1212 01:23:56.200218  226740 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/190272.pem /etc/ssl/certs/190272.pem
	I1212 01:23:56.213791  226740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/190272.pem
	I1212 01:23:56.219832  226740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:27 /usr/share/ca-certificates/190272.pem
	I1212 01:23:56.219916  226740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/190272.pem
	I1212 01:23:56.228099  226740 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:23:56.241433  226740 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/190272.pem /etc/ssl/certs/51391683.0
	I1212 01:23:56.255488  226740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:23:56.262449  226740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:23:56.271109  226740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:23:56.279598  226740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:23:56.288262  226740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:23:56.297419  226740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:23:56.306243  226740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
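The `openssl`/`ln` sequences above install each CA under its subject-hash symlink (`<hash>.0` is where OpenSSL's cert lookup expects it) and then verify every cert is still valid a day out with `-checkend 86400`. Sketch with a throwaway self-signed cert and a temp dir standing in for /etc/ssl/certs:

```shell
certdir=$(mktemp -d)
# Throwaway CA cert (2-day validity so the 24h check below passes).
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj /CN=minikubeCA-demo \
  -keyout "$certdir/ca.key" -out "$certdir/minikubeCA.pem" 2>/dev/null

hash=$(openssl x509 -hash -noout -in "$certdir/minikubeCA.pem")
ln -fs "$certdir/minikubeCA.pem" "$certdir/$hash.0"

# -checkend 86400: exit 0 iff the cert does not expire within the next day.
openssl x509 -noout -in "$certdir/$hash.0" -checkend 86400 && echo "valid for 24h"
```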
	I1212 01:23:56.315628  226740 kubeadm.go:401] StartCluster: {Name:test-preload-209368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-209368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:23:56.315752  226740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:23:56.315825  226740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:23:56.365238  226740 cri.go:89] found id: ""
	I1212 01:23:56.365347  226740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:23:56.385519  226740 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 01:23:56.385541  226740 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 01:23:56.385589  226740 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:23:56.402120  226740 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:23:56.402728  226740 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-209368" does not appear in /home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 01:23:56.402909  226740 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-186349/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-209368" cluster setting kubeconfig missing "test-preload-209368" context setting]
	I1212 01:23:56.403295  226740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/kubeconfig: {Name:mkdf9d6588b522077beb3bc03f9eff4a2b248de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:56.404053  226740 kapi.go:59] client config for test-preload-209368: &rest.Config{Host:"https://192.168.39.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/client.key", CAFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 01:23:56.404651  226740 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 01:23:56.404668  226740 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 01:23:56.404675  226740 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 01:23:56.404679  226740 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 01:23:56.404686  226740 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 01:23:56.405102  226740 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:23:56.426442  226740 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.115
	I1212 01:23:56.426510  226740 kubeadm.go:1161] stopping kube-system containers ...
	I1212 01:23:56.426527  226740 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:23:56.426591  226740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:23:56.466195  226740 cri.go:89] found id: ""
	I1212 01:23:56.466271  226740 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:23:56.486125  226740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:23:56.499968  226740 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:23:56.499990  226740 kubeadm.go:158] found existing configuration files:
	
	I1212 01:23:56.500037  226740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:23:56.513029  226740 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:23:56.513095  226740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:23:56.526830  226740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:23:56.539378  226740 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:23:56.539483  226740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:23:56.553314  226740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:23:56.565610  226740 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:23:56.565690  226740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:23:56.579447  226740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:23:56.592029  226740 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:23:56.592097  226740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
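The four `grep`/`rm` pairs above apply one rule: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so the following `kubeadm init phase kubeconfig` regenerates it. Sketch with temp files standing in for /etc/kubernetes/*.conf:

```shell
endpoint="https://control-plane.minikube.internal:8443"
dir=$(mktemp -d)
echo "server: $endpoint" > "$dir/admin.conf"                # current
echo "server: https://10.0.0.1:8443" > "$dir/kubelet.conf"  # stale

# Remove any config that does not mention the expected endpoint.
for f in "$dir"/*.conf; do
  grep -q "$endpoint" "$f" || rm -f "$f"
done
ls "$dir"
```

Note the log's case is the degenerate one: the files do not exist at all, so every `grep` fails with status 2 and each `rm -f` is a no-op.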
	I1212 01:23:56.605544  226740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:23:56.619912  226740 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:23:56.685320  226740 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:23:57.843416  226740 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.158049754s)
	I1212 01:23:57.843578  226740 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:23:58.137952  226740 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:23:58.220777  226740 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:23:58.315406  226740 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:23:58.315548  226740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:23:58.815782  226740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:23:59.315894  226740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:23:59.816402  226740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:00.316360  226740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:00.381204  226740 api_server.go:72] duration metric: took 2.065814746s to wait for apiserver process to appear ...
	I1212 01:24:00.381244  226740 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:24:00.381268  226740 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I1212 01:24:00.381813  226740 api_server.go:269] stopped: https://192.168.39.115:8443/healthz: Get "https://192.168.39.115:8443/healthz": dial tcp 192.168.39.115:8443: connect: connection refused
	I1212 01:24:00.881550  226740 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I1212 01:24:03.167448  226740 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:24:03.167535  226740 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:24:03.167551  226740 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I1212 01:24:03.215965  226740 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:24:03.215998  226740 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:24:03.381350  226740 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I1212 01:24:03.391166  226740 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:24:03.391202  226740 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:24:03.882034  226740 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I1212 01:24:03.887593  226740 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:24:03.887628  226740 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:24:04.382329  226740 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I1212 01:24:04.397887  226740 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:24:04.397929  226740 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:24:04.881636  226740 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I1212 01:24:04.887324  226740 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I1212 01:24:04.894482  226740 api_server.go:141] control plane version: v1.34.2
	I1212 01:24:04.894515  226740 api_server.go:131] duration metric: took 4.513264003s to wait for apiserver health ...
	I1212 01:24:04.894526  226740 cni.go:84] Creating CNI manager for ""
	I1212 01:24:04.894532  226740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:24:04.896376  226740 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:24:04.897630  226740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:24:04.915661  226740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:24:04.944046  226740 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:24:04.951510  226740 system_pods.go:59] 7 kube-system pods found
	I1212 01:24:04.951547  226740 system_pods.go:61] "coredns-66bc5c9577-9hm47" [96faeeac-4463-44c8-95a9-9f9bc62676b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:24:04.951557  226740 system_pods.go:61] "etcd-test-preload-209368" [b198f168-6fa6-4759-8d3c-c9afb1cee7b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:24:04.951564  226740 system_pods.go:61] "kube-apiserver-test-preload-209368" [2e771e95-3214-4af1-8c22-2971c3095af3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:24:04.951570  226740 system_pods.go:61] "kube-controller-manager-test-preload-209368" [ed44ac32-8cd2-4a49-b21b-686c09ff335b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:24:04.951576  226740 system_pods.go:61] "kube-proxy-jqjgs" [b77693a9-f717-4a84-942a-62423b4bd20c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:24:04.951581  226740 system_pods.go:61] "kube-scheduler-test-preload-209368" [c6ebfa10-3a58-4f3b-84d8-2e6639578612] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:24:04.951585  226740 system_pods.go:61] "storage-provisioner" [cc7412bb-83f8-4fa7-ab3a-11c78e93a7f6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:24:04.951592  226740 system_pods.go:74] duration metric: took 7.515762ms to wait for pod list to return data ...
	I1212 01:24:04.951600  226740 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:24:04.961580  226740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:24:04.961616  226740 node_conditions.go:123] node cpu capacity is 2
	I1212 01:24:04.961629  226740 node_conditions.go:105] duration metric: took 10.0257ms to run NodePressure ...
	I1212 01:24:04.961681  226740 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:24:05.248980  226740 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1212 01:24:05.252774  226740 kubeadm.go:744] kubelet initialised
	I1212 01:24:05.252798  226740 kubeadm.go:745] duration metric: took 3.788913ms waiting for restarted kubelet to initialise ...
	I1212 01:24:05.252816  226740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:24:05.275288  226740 ops.go:34] apiserver oom_adj: -16
	I1212 01:24:05.275319  226740 kubeadm.go:602] duration metric: took 8.889770554s to restartPrimaryControlPlane
	I1212 01:24:05.275332  226740 kubeadm.go:403] duration metric: took 8.959716335s to StartCluster
	I1212 01:24:05.275351  226740 settings.go:142] acquiring lock: {Name:mkc54bc00cde7f692cc672e67ab0af4ae6a15c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:05.275440  226740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 01:24:05.276087  226740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-186349/kubeconfig: {Name:mkdf9d6588b522077beb3bc03f9eff4a2b248de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:05.276356  226740 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:24:05.276437  226740 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:24:05.276548  226740 addons.go:70] Setting storage-provisioner=true in profile "test-preload-209368"
	I1212 01:24:05.276572  226740 addons.go:239] Setting addon storage-provisioner=true in "test-preload-209368"
	W1212 01:24:05.276580  226740 addons.go:248] addon storage-provisioner should already be in state true
	I1212 01:24:05.276583  226740 addons.go:70] Setting default-storageclass=true in profile "test-preload-209368"
	I1212 01:24:05.276613  226740 host.go:66] Checking if "test-preload-209368" exists ...
	I1212 01:24:05.276612  226740 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-209368"
	I1212 01:24:05.276625  226740 config.go:182] Loaded profile config "test-preload-209368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:24:05.278171  226740 out.go:179] * Verifying Kubernetes components...
	I1212 01:24:05.279319  226740 kapi.go:59] client config for test-preload-209368: &rest.Config{Host:"https://192.168.39.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/client.key", CAFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 01:24:05.279517  226740 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:24:05.279571  226740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:24:05.279629  226740 addons.go:239] Setting addon default-storageclass=true in "test-preload-209368"
	W1212 01:24:05.279647  226740 addons.go:248] addon default-storageclass should already be in state true
	I1212 01:24:05.279669  226740 host.go:66] Checking if "test-preload-209368" exists ...
	I1212 01:24:05.280874  226740 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:24:05.280889  226740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:24:05.281376  226740 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:24:05.281392  226740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:24:05.284180  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:24:05.284631  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:24:05.284647  226740 main.go:143] libmachine: domain test-preload-209368 has defined MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:24:05.284658  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:24:05.284870  226740 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/test-preload-209368/id_rsa Username:docker}
	I1212 01:24:05.285192  226740 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:df:02", ip: ""} in network mk-test-preload-209368: {Iface:virbr1 ExpiryTime:2025-12-12 02:23:45 +0000 UTC Type:0 Mac:52:54:00:d0:df:02 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-209368 Clientid:01:52:54:00:d0:df:02}
	I1212 01:24:05.285218  226740 main.go:143] libmachine: domain test-preload-209368 has defined IP address 192.168.39.115 and MAC address 52:54:00:d0:df:02 in network mk-test-preload-209368
	I1212 01:24:05.285380  226740 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/test-preload-209368/id_rsa Username:docker}
	I1212 01:24:05.577584  226740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:24:05.617067  226740 node_ready.go:35] waiting up to 6m0s for node "test-preload-209368" to be "Ready" ...
	I1212 01:24:05.620410  226740 node_ready.go:49] node "test-preload-209368" is "Ready"
	I1212 01:24:05.620451  226740 node_ready.go:38] duration metric: took 3.32073ms for node "test-preload-209368" to be "Ready" ...
	I1212 01:24:05.620497  226740 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:24:05.620566  226740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:05.658347  226740 api_server.go:72] duration metric: took 381.949714ms to wait for apiserver process to appear ...
	I1212 01:24:05.658383  226740 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:24:05.658404  226740 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I1212 01:24:05.669198  226740 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I1212 01:24:05.671313  226740 api_server.go:141] control plane version: v1.34.2
	I1212 01:24:05.671344  226740 api_server.go:131] duration metric: took 12.954686ms to wait for apiserver health ...
	I1212 01:24:05.671354  226740 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:24:05.676300  226740 system_pods.go:59] 7 kube-system pods found
	I1212 01:24:05.676345  226740 system_pods.go:61] "coredns-66bc5c9577-9hm47" [96faeeac-4463-44c8-95a9-9f9bc62676b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:24:05.676353  226740 system_pods.go:61] "etcd-test-preload-209368" [b198f168-6fa6-4759-8d3c-c9afb1cee7b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:24:05.676364  226740 system_pods.go:61] "kube-apiserver-test-preload-209368" [2e771e95-3214-4af1-8c22-2971c3095af3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:24:05.676374  226740 system_pods.go:61] "kube-controller-manager-test-preload-209368" [ed44ac32-8cd2-4a49-b21b-686c09ff335b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:24:05.676380  226740 system_pods.go:61] "kube-proxy-jqjgs" [b77693a9-f717-4a84-942a-62423b4bd20c] Running
	I1212 01:24:05.676388  226740 system_pods.go:61] "kube-scheduler-test-preload-209368" [c6ebfa10-3a58-4f3b-84d8-2e6639578612] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:24:05.676393  226740 system_pods.go:61] "storage-provisioner" [cc7412bb-83f8-4fa7-ab3a-11c78e93a7f6] Running
	I1212 01:24:05.676404  226740 system_pods.go:74] duration metric: took 5.042621ms to wait for pod list to return data ...
	I1212 01:24:05.676419  226740 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:24:05.681135  226740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:24:05.686229  226740 default_sa.go:45] found service account: "default"
	I1212 01:24:05.686262  226740 default_sa.go:55] duration metric: took 9.83642ms for default service account to be created ...
	I1212 01:24:05.686274  226740 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:24:05.694648  226740 system_pods.go:86] 7 kube-system pods found
	I1212 01:24:05.694684  226740 system_pods.go:89] "coredns-66bc5c9577-9hm47" [96faeeac-4463-44c8-95a9-9f9bc62676b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:24:05.694692  226740 system_pods.go:89] "etcd-test-preload-209368" [b198f168-6fa6-4759-8d3c-c9afb1cee7b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:24:05.694700  226740 system_pods.go:89] "kube-apiserver-test-preload-209368" [2e771e95-3214-4af1-8c22-2971c3095af3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:24:05.694708  226740 system_pods.go:89] "kube-controller-manager-test-preload-209368" [ed44ac32-8cd2-4a49-b21b-686c09ff335b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:24:05.694715  226740 system_pods.go:89] "kube-proxy-jqjgs" [b77693a9-f717-4a84-942a-62423b4bd20c] Running
	I1212 01:24:05.694721  226740 system_pods.go:89] "kube-scheduler-test-preload-209368" [c6ebfa10-3a58-4f3b-84d8-2e6639578612] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:24:05.694725  226740 system_pods.go:89] "storage-provisioner" [cc7412bb-83f8-4fa7-ab3a-11c78e93a7f6] Running
	I1212 01:24:05.694734  226740 system_pods.go:126] duration metric: took 8.453595ms to wait for k8s-apps to be running ...
	I1212 01:24:05.694744  226740 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:24:05.694803  226740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:24:05.788997  226740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:24:06.498053  226740 system_svc.go:56] duration metric: took 803.300286ms WaitForService to wait for kubelet
	I1212 01:24:06.498085  226740 kubeadm.go:587] duration metric: took 1.221699226s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:24:06.498104  226740 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:24:06.503215  226740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:24:06.503246  226740 node_conditions.go:123] node cpu capacity is 2
	I1212 01:24:06.503262  226740 node_conditions.go:105] duration metric: took 5.151342ms to run NodePressure ...
	I1212 01:24:06.503276  226740 start.go:242] waiting for startup goroutines ...
	I1212 01:24:06.512948  226740 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 01:24:06.514492  226740 addons.go:530] duration metric: took 1.238054125s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 01:24:06.514554  226740 start.go:247] waiting for cluster config update ...
	I1212 01:24:06.514572  226740 start.go:256] writing updated cluster config ...
	I1212 01:24:06.514875  226740 ssh_runner.go:195] Run: rm -f paused
	I1212 01:24:06.523891  226740 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 01:24:06.524385  226740 kapi.go:59] client config for test-preload-209368: &rest.Config{Host:"https://192.168.39.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/profiles/test-preload-209368/client.key", CAFile:"/home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 01:24:06.548667  226740 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9hm47" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 01:24:08.557771  226740 pod_ready.go:104] pod "coredns-66bc5c9577-9hm47" is not "Ready", error: <nil>
	W1212 01:24:11.055926  226740 pod_ready.go:104] pod "coredns-66bc5c9577-9hm47" is not "Ready", error: <nil>
	I1212 01:24:12.556784  226740 pod_ready.go:94] pod "coredns-66bc5c9577-9hm47" is "Ready"
	I1212 01:24:12.556814  226740 pod_ready.go:86] duration metric: took 6.008116927s for pod "coredns-66bc5c9577-9hm47" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:24:12.560897  226740 pod_ready.go:83] waiting for pod "etcd-test-preload-209368" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 01:24:14.568378  226740 pod_ready.go:104] pod "etcd-test-preload-209368" is not "Ready", error: <nil>
	I1212 01:24:16.066932  226740 pod_ready.go:94] pod "etcd-test-preload-209368" is "Ready"
	I1212 01:24:16.066963  226740 pod_ready.go:86] duration metric: took 3.506036791s for pod "etcd-test-preload-209368" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:24:16.069875  226740 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-209368" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:24:16.075808  226740 pod_ready.go:94] pod "kube-apiserver-test-preload-209368" is "Ready"
	I1212 01:24:16.075856  226740 pod_ready.go:86] duration metric: took 5.939107ms for pod "kube-apiserver-test-preload-209368" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:24:16.079203  226740 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-209368" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:24:16.084908  226740 pod_ready.go:94] pod "kube-controller-manager-test-preload-209368" is "Ready"
	I1212 01:24:16.084947  226740 pod_ready.go:86] duration metric: took 5.704652ms for pod "kube-controller-manager-test-preload-209368" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:24:16.088085  226740 pod_ready.go:83] waiting for pod "kube-proxy-jqjgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:24:16.265415  226740 pod_ready.go:94] pod "kube-proxy-jqjgs" is "Ready"
	I1212 01:24:16.265455  226740 pod_ready.go:86] duration metric: took 177.330257ms for pod "kube-proxy-jqjgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:24:16.465104  226740 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-209368" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:24:18.472526  226740 pod_ready.go:94] pod "kube-scheduler-test-preload-209368" is "Ready"
	I1212 01:24:18.472555  226740 pod_ready.go:86] duration metric: took 2.007414146s for pod "kube-scheduler-test-preload-209368" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:24:18.472567  226740 pod_ready.go:40] duration metric: took 11.948632171s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 01:24:18.520197  226740 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 01:24:18.522137  226740 out.go:179] * Done! kubectl is now configured to use "test-preload-209368" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.357410292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765502659357385008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e725abe-9a26-4e28-ae44-9e4aa7bed8ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.358406730Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30608dd8-c712-4eec-872d-096890634099 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.358589957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30608dd8-c712-4eec-872d-096890634099 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.358779867Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd4ad2a3148e3fcbf6b1ded0fe351d7998ad15ce286fab558f5bc874459057a9,PodSandboxId:a51087567544a59b171bf05d1be00904079e9670d0069521a342b0568fe666ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765502648382236874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9hm47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96faeeac-4463-44c8-95a9-9f9bc62676b9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b05ab1908fab0db3076bba56adeae9334a46c5f29f6c91e18745dc00a1ed212,PodSandboxId:9544c2cf2a8c15d42b137156a1c99328aa9265011d2e82f9661544db65630882,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765502644714743016,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqjgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77693a9-f717-4a84-942a-62423b4bd20c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6407b3fa90fc4a53166a81676b14746f165e5ecf8d7526716f1b7b5529c4d96b,PodSandboxId:dec64e911bf35e4029a31529d94e4d30bc3881f4f229d7b7acceacbb6034e40d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765502644693152068,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7412bb-83f8-4fa7-ab3a-11c78e93a7f6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775ef14afbe499cadfb0ba8c03b932c1c6dc28796de56f40bae04bcac129bd8d,PodSandboxId:e2228dcbf043f8246ccd41d97fa25656902848857b2871b54bffabfeae3f11bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765502640127909910,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce80e222ea8ef0463905c31a714245fe,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23adfee117137d72e22a2f4214c9fb9db9fe6180f576f6102dd9004aa550c69d,PodSandboxId:21b2585eb01feeb8dadfaa48433daffda55808b901aba0d04a95e2b3504c449c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,Crea
tedAt:1765502640146708405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24af99637bd2cb8ebf48e56c1713be77,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39f65a65d8424a0f9aa8b8568cff0672c62b2fdcd25a9a3d9c178af8ed8005ca,PodSandboxId:2eb30c13238c59847dbf2b6267f725fa617ad0e2fc3553622631feddc34245b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765502640110098003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1f31b3bc0e6b77b2af6e23c34365a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb782b5eb19b20319e65b376d48d4c20ac7800962bb0e7554cc64d51c11c575,PodSandboxId:503cbc83ab423d3ebf26f6577acb78eb3999a7964400d81f413d9cf4b49b1c82,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765502640031236062,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ee2ad5096d7c2c2fef454233e3d8b2,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30608dd8-c712-4eec-872d-096890634099 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.398715549Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=99dc4603-5ce2-40c1-b784-ed5e25483351 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.398867752Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=99dc4603-5ce2-40c1-b784-ed5e25483351 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.400488903Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df977620-0d76-4c96-a827-9a45e94cce84 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.400947566Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765502659400917294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df977620-0d76-4c96-a827-9a45e94cce84 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.402147037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7195ba54-5468-4d9b-acb7-96aa475e9a24 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.402221112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7195ba54-5468-4d9b-acb7-96aa475e9a24 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.402412490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd4ad2a3148e3fcbf6b1ded0fe351d7998ad15ce286fab558f5bc874459057a9,PodSandboxId:a51087567544a59b171bf05d1be00904079e9670d0069521a342b0568fe666ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765502648382236874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9hm47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96faeeac-4463-44c8-95a9-9f9bc62676b9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b05ab1908fab0db3076bba56adeae9334a46c5f29f6c91e18745dc00a1ed212,PodSandboxId:9544c2cf2a8c15d42b137156a1c99328aa9265011d2e82f9661544db65630882,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765502644714743016,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqjgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77693a9-f717-4a84-942a-62423b4bd20c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6407b3fa90fc4a53166a81676b14746f165e5ecf8d7526716f1b7b5529c4d96b,PodSandboxId:dec64e911bf35e4029a31529d94e4d30bc3881f4f229d7b7acceacbb6034e40d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765502644693152068,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7412bb-83f8-4fa7-ab3a-11c78e93a7f6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775ef14afbe499cadfb0ba8c03b932c1c6dc28796de56f40bae04bcac129bd8d,PodSandboxId:e2228dcbf043f8246ccd41d97fa25656902848857b2871b54bffabfeae3f11bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765502640127909910,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce80e222ea8ef0463905c31a714245fe,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23adfee117137d72e22a2f4214c9fb9db9fe6180f576f6102dd9004aa550c69d,PodSandboxId:21b2585eb01feeb8dadfaa48433daffda55808b901aba0d04a95e2b3504c449c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,Crea
tedAt:1765502640146708405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24af99637bd2cb8ebf48e56c1713be77,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39f65a65d8424a0f9aa8b8568cff0672c62b2fdcd25a9a3d9c178af8ed8005ca,PodSandboxId:2eb30c13238c59847dbf2b6267f725fa617ad0e2fc3553622631feddc34245b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765502640110098003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1f31b3bc0e6b77b2af6e23c34365a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb782b5eb19b20319e65b376d48d4c20ac7800962bb0e7554cc64d51c11c575,PodSandboxId:503cbc83ab423d3ebf26f6577acb78eb3999a7964400d81f413d9cf4b49b1c82,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765502640031236062,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ee2ad5096d7c2c2fef454233e3d8b2,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7195ba54-5468-4d9b-acb7-96aa475e9a24 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.440402427Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b6322be-574c-4e4b-a96b-656b9090dbbd name=/runtime.v1.RuntimeService/Version
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.440497569Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b6322be-574c-4e4b-a96b-656b9090dbbd name=/runtime.v1.RuntimeService/Version
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.441997463Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26aaea96-a1d3-4607-9358-48199945d055 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.442436537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765502659442412286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26aaea96-a1d3-4607-9358-48199945d055 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.443674880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=012b6cfa-b01d-450d-b445-47883c950542 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.443724560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=012b6cfa-b01d-450d-b445-47883c950542 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.444049505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd4ad2a3148e3fcbf6b1ded0fe351d7998ad15ce286fab558f5bc874459057a9,PodSandboxId:a51087567544a59b171bf05d1be00904079e9670d0069521a342b0568fe666ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765502648382236874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9hm47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96faeeac-4463-44c8-95a9-9f9bc62676b9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b05ab1908fab0db3076bba56adeae9334a46c5f29f6c91e18745dc00a1ed212,PodSandboxId:9544c2cf2a8c15d42b137156a1c99328aa9265011d2e82f9661544db65630882,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765502644714743016,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqjgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77693a9-f717-4a84-942a-62423b4bd20c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6407b3fa90fc4a53166a81676b14746f165e5ecf8d7526716f1b7b5529c4d96b,PodSandboxId:dec64e911bf35e4029a31529d94e4d30bc3881f4f229d7b7acceacbb6034e40d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765502644693152068,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7412bb-83f8-4fa7-ab3a-11c78e93a7f6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775ef14afbe499cadfb0ba8c03b932c1c6dc28796de56f40bae04bcac129bd8d,PodSandboxId:e2228dcbf043f8246ccd41d97fa25656902848857b2871b54bffabfeae3f11bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765502640127909910,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce80e222ea8ef0463905c31a714245fe,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23adfee117137d72e22a2f4214c9fb9db9fe6180f576f6102dd9004aa550c69d,PodSandboxId:21b2585eb01feeb8dadfaa48433daffda55808b901aba0d04a95e2b3504c449c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,Crea
tedAt:1765502640146708405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24af99637bd2cb8ebf48e56c1713be77,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39f65a65d8424a0f9aa8b8568cff0672c62b2fdcd25a9a3d9c178af8ed8005ca,PodSandboxId:2eb30c13238c59847dbf2b6267f725fa617ad0e2fc3553622631feddc34245b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765502640110098003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1f31b3bc0e6b77b2af6e23c34365a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb782b5eb19b20319e65b376d48d4c20ac7800962bb0e7554cc64d51c11c575,PodSandboxId:503cbc83ab423d3ebf26f6577acb78eb3999a7964400d81f413d9cf4b49b1c82,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765502640031236062,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ee2ad5096d7c2c2fef454233e3d8b2,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=012b6cfa-b01d-450d-b445-47883c950542 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.478484138Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6765ed8-60b5-4635-ad54-7bd8a37e80a6 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.478582855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6765ed8-60b5-4635-ad54-7bd8a37e80a6 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.481324582Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e830871d-9e7c-4ef8-83bf-de95876c10de name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.482333570Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765502659482231196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e830871d-9e7c-4ef8-83bf-de95876c10de name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.483689073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=874065ad-97d1-41fd-beaf-adce3d8fb8b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.483743586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=874065ad-97d1-41fd-beaf-adce3d8fb8b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:19 test-preload-209368 crio[843]: time="2025-12-12 01:24:19.484392724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd4ad2a3148e3fcbf6b1ded0fe351d7998ad15ce286fab558f5bc874459057a9,PodSandboxId:a51087567544a59b171bf05d1be00904079e9670d0069521a342b0568fe666ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765502648382236874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9hm47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96faeeac-4463-44c8-95a9-9f9bc62676b9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b05ab1908fab0db3076bba56adeae9334a46c5f29f6c91e18745dc00a1ed212,PodSandboxId:9544c2cf2a8c15d42b137156a1c99328aa9265011d2e82f9661544db65630882,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765502644714743016,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqjgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77693a9-f717-4a84-942a-62423b4bd20c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6407b3fa90fc4a53166a81676b14746f165e5ecf8d7526716f1b7b5529c4d96b,PodSandboxId:dec64e911bf35e4029a31529d94e4d30bc3881f4f229d7b7acceacbb6034e40d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765502644693152068,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7412bb-83f8-4fa7-ab3a-11c78e93a7f6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775ef14afbe499cadfb0ba8c03b932c1c6dc28796de56f40bae04bcac129bd8d,PodSandboxId:e2228dcbf043f8246ccd41d97fa25656902848857b2871b54bffabfeae3f11bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765502640127909910,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce80e222ea8ef0463905c31a714245fe,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23adfee117137d72e22a2f4214c9fb9db9fe6180f576f6102dd9004aa550c69d,PodSandboxId:21b2585eb01feeb8dadfaa48433daffda55808b901aba0d04a95e2b3504c449c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,Crea
tedAt:1765502640146708405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24af99637bd2cb8ebf48e56c1713be77,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39f65a65d8424a0f9aa8b8568cff0672c62b2fdcd25a9a3d9c178af8ed8005ca,PodSandboxId:2eb30c13238c59847dbf2b6267f725fa617ad0e2fc3553622631feddc34245b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765502640110098003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1f31b3bc0e6b77b2af6e23c34365a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb782b5eb19b20319e65b376d48d4c20ac7800962bb0e7554cc64d51c11c575,PodSandboxId:503cbc83ab423d3ebf26f6577acb78eb3999a7964400d81f413d9cf4b49b1c82,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765502640031236062,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-209368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ee2ad5096d7c2c2fef454233e3d8b2,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=874065ad-97d1-41fd-beaf-adce3d8fb8b4 name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	cd4ad2a3148e3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   1                   a51087567544a       coredns-66bc5c9577-9hm47                      kube-system
	4b05ab1908fab       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   14 seconds ago      Running             kube-proxy                1                   9544c2cf2a8c1       kube-proxy-jqjgs                              kube-system
	6407b3fa90fc4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   dec64e911bf35       storage-provisioner                           kube-system
	23adfee117137       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   19 seconds ago      Running             kube-scheduler            1                   21b2585eb01fe       kube-scheduler-test-preload-209368            kube-system
	775ef14afbe49       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   19 seconds ago      Running             etcd                      1                   e2228dcbf043f       etcd-test-preload-209368                      kube-system
	39f65a65d8424       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   19 seconds ago      Running             kube-controller-manager   1                   2eb30c13238c5       kube-controller-manager-test-preload-209368   kube-system
	bcb782b5eb19b       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   19 seconds ago      Running             kube-apiserver            1                   503cbc83ab423       kube-apiserver-test-preload-209368            kube-system
	
	
	==> coredns [cd4ad2a3148e3fcbf6b1ded0fe351d7998ad15ce286fab558f5bc874459057a9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36031 - 26468 "HINFO IN 7227581794337784368.1568074637850132902. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.440725551s
	
	
	==> describe nodes <==
	Name:               test-preload-209368
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-209368
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=test-preload-209368
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T01_22_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 01:22:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-209368
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 01:24:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 01:24:05 +0000   Fri, 12 Dec 2025 01:22:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 01:24:05 +0000   Fri, 12 Dec 2025 01:22:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 01:24:05 +0000   Fri, 12 Dec 2025 01:22:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 01:24:05 +0000   Fri, 12 Dec 2025 01:24:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    test-preload-209368
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ee160f3cc5844cca2747490cbfd7c3b
	  System UUID:                3ee160f3-cc58-44cc-a274-7490cbfd7c3b
	  Boot ID:                    2441114a-413b-41e2-9683-42060eabffa3
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-9hm47                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     99s
	  kube-system                 etcd-test-preload-209368                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         105s
	  kube-system                 kube-apiserver-test-preload-209368             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-test-preload-209368    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-jqjgs                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-scheduler-test-preload-209368             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 97s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   Starting                 106s               kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  105s               kubelet          Node test-preload-209368 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s               kubelet          Node test-preload-209368 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s               kubelet          Node test-preload-209368 status is now: NodeHasSufficientPID
	  Normal   NodeReady                105s               kubelet          Node test-preload-209368 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  105s               kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           100s               node-controller  Node test-preload-209368 event: Registered Node test-preload-209368 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-209368 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-209368 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-209368 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                kubelet          Node test-preload-209368 has been rebooted, boot id: 2441114a-413b-41e2-9683-42060eabffa3
	  Normal   RegisteredNode           13s                node-controller  Node test-preload-209368 event: Registered Node test-preload-209368 in Controller
	
	
	==> dmesg <==
	[Dec12 01:23] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001790] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006999] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.025606] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.095204] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.110180] kauditd_printk_skb: 102 callbacks suppressed
	[Dec12 01:24] kauditd_printk_skb: 168 callbacks suppressed
	[  +3.960205] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [775ef14afbe499cadfb0ba8c03b932c1c6dc28796de56f40bae04bcac129bd8d] <==
	{"level":"warn","ts":"2025-12-12T01:24:02.010426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.028267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.040295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.058512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.077777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.103499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.115688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.127038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.143471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.159352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.176024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.183604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.195092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.217540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.220681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.234834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.246474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.257618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.268725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.286383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.303629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.316688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.327908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.346302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:24:02.444567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37044","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:24:19 up 0 min,  0 users,  load average: 1.13, 0.31, 0.11
	Linux test-preload-209368 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [bcb782b5eb19b20319e65b376d48d4c20ac7800962bb0e7554cc64d51c11c575] <==
	I1212 01:24:03.369268       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 01:24:03.359127       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 01:24:03.359116       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1212 01:24:03.373874       1 aggregator.go:171] initial CRD sync complete...
	I1212 01:24:03.373889       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 01:24:03.373898       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 01:24:03.373906       1 cache.go:39] Caches are synced for autoregister controller
	I1212 01:24:03.378903       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 01:24:03.378953       1 policy_source.go:240] refreshing policies
	I1212 01:24:03.380720       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 01:24:03.400714       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1212 01:24:03.400757       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 01:24:03.401714       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1212 01:24:03.401950       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 01:24:03.410389       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 01:24:03.415235       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 01:24:04.105184       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 01:24:04.348848       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 01:24:05.078008       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 01:24:05.130471       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 01:24:05.178665       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 01:24:05.204566       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 01:24:06.683449       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 01:24:06.884681       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 01:24:06.941718       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [39f65a65d8424a0f9aa8b8568cff0672c62b2fdcd25a9a3d9c178af8ed8005ca] <==
	I1212 01:24:06.616593       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 01:24:06.616598       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 01:24:06.621060       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 01:24:06.626360       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1212 01:24:06.629609       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1212 01:24:06.629885       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1212 01:24:06.629886       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 01:24:06.630060       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 01:24:06.632622       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1212 01:24:06.633928       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1212 01:24:06.634058       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 01:24:06.634382       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 01:24:06.634995       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 01:24:06.635595       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 01:24:06.637852       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 01:24:06.640871       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 01:24:06.647386       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1212 01:24:06.653762       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 01:24:06.655222       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 01:24:06.663862       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 01:24:06.669199       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 01:24:06.669367       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 01:24:06.669465       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-209368"
	I1212 01:24:06.669543       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 01:24:06.670774       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [4b05ab1908fab0db3076bba56adeae9334a46c5f29f6c91e18745dc00a1ed212] <==
	I1212 01:24:04.982510       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 01:24:05.086715       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 01:24:05.086773       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.115"]
	E1212 01:24:05.086891       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:24:05.165888       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1212 01:24:05.166963       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 01:24:05.167024       1 server_linux.go:132] "Using iptables Proxier"
	I1212 01:24:05.196922       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:24:05.199146       1 server.go:527] "Version info" version="v1.34.2"
	I1212 01:24:05.199191       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:24:05.211728       1 config.go:309] "Starting node config controller"
	I1212 01:24:05.211762       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 01:24:05.211770       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 01:24:05.218679       1 config.go:200] "Starting service config controller"
	I1212 01:24:05.218710       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 01:24:05.218737       1 config.go:106] "Starting endpoint slice config controller"
	I1212 01:24:05.218741       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 01:24:05.218752       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 01:24:05.218755       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 01:24:05.319669       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 01:24:05.323841       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 01:24:05.323872       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [23adfee117137d72e22a2f4214c9fb9db9fe6180f576f6102dd9004aa550c69d] <==
	I1212 01:24:01.385734       1 serving.go:386] Generated self-signed cert in-memory
	I1212 01:24:03.613099       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 01:24:03.613155       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:24:03.621588       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1212 01:24:03.621643       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1212 01:24:03.621686       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:24:03.621693       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:24:03.621704       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1212 01:24:03.621710       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1212 01:24:03.621867       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 01:24:03.621935       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 01:24:03.723028       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1212 01:24:03.723100       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1212 01:24:03.723137       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 01:24:03 test-preload-209368 kubelet[1197]: E1212 01:24:03.448538    1197 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-209368\" already exists" pod="kube-system/etcd-test-preload-209368"
	Dec 12 01:24:03 test-preload-209368 kubelet[1197]: I1212 01:24:03.458460    1197 kubelet_node_status.go:124] "Node was previously registered" node="test-preload-209368"
	Dec 12 01:24:03 test-preload-209368 kubelet[1197]: I1212 01:24:03.458597    1197 kubelet_node_status.go:78] "Successfully registered node" node="test-preload-209368"
	Dec 12 01:24:03 test-preload-209368 kubelet[1197]: I1212 01:24:03.458627    1197 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 12 01:24:03 test-preload-209368 kubelet[1197]: I1212 01:24:03.461262    1197 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 01:24:03 test-preload-209368 kubelet[1197]: I1212 01:24:03.463015    1197 setters.go:543] "Node became not ready" node="test-preload-209368" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T01:24:03Z","lastTransitionTime":"2025-12-12T01:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Dec 12 01:24:04 test-preload-209368 kubelet[1197]: I1212 01:24:04.214240    1197 apiserver.go:52] "Watching apiserver"
	Dec 12 01:24:04 test-preload-209368 kubelet[1197]: E1212 01:24:04.226524    1197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-9hm47" podUID="96faeeac-4463-44c8-95a9-9f9bc62676b9"
	Dec 12 01:24:04 test-preload-209368 kubelet[1197]: I1212 01:24:04.246694    1197 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 12 01:24:04 test-preload-209368 kubelet[1197]: I1212 01:24:04.342971    1197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b77693a9-f717-4a84-942a-62423b4bd20c-lib-modules\") pod \"kube-proxy-jqjgs\" (UID: \"b77693a9-f717-4a84-942a-62423b4bd20c\") " pod="kube-system/kube-proxy-jqjgs"
	Dec 12 01:24:04 test-preload-209368 kubelet[1197]: I1212 01:24:04.343074    1197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cc7412bb-83f8-4fa7-ab3a-11c78e93a7f6-tmp\") pod \"storage-provisioner\" (UID: \"cc7412bb-83f8-4fa7-ab3a-11c78e93a7f6\") " pod="kube-system/storage-provisioner"
	Dec 12 01:24:04 test-preload-209368 kubelet[1197]: I1212 01:24:04.343105    1197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b77693a9-f717-4a84-942a-62423b4bd20c-xtables-lock\") pod \"kube-proxy-jqjgs\" (UID: \"b77693a9-f717-4a84-942a-62423b4bd20c\") " pod="kube-system/kube-proxy-jqjgs"
	Dec 12 01:24:04 test-preload-209368 kubelet[1197]: E1212 01:24:04.343582    1197 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 01:24:04 test-preload-209368 kubelet[1197]: E1212 01:24:04.343650    1197 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/96faeeac-4463-44c8-95a9-9f9bc62676b9-config-volume podName:96faeeac-4463-44c8-95a9-9f9bc62676b9 nodeName:}" failed. No retries permitted until 2025-12-12 01:24:04.843631292 +0000 UTC m=+6.729362107 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/96faeeac-4463-44c8-95a9-9f9bc62676b9-config-volume") pod "coredns-66bc5c9577-9hm47" (UID: "96faeeac-4463-44c8-95a9-9f9bc62676b9") : object "kube-system"/"coredns" not registered
	Dec 12 01:24:04 test-preload-209368 kubelet[1197]: E1212 01:24:04.848415    1197 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 01:24:04 test-preload-209368 kubelet[1197]: E1212 01:24:04.848478    1197 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/96faeeac-4463-44c8-95a9-9f9bc62676b9-config-volume podName:96faeeac-4463-44c8-95a9-9f9bc62676b9 nodeName:}" failed. No retries permitted until 2025-12-12 01:24:05.848465308 +0000 UTC m=+7.734196124 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/96faeeac-4463-44c8-95a9-9f9bc62676b9-config-volume") pod "coredns-66bc5c9577-9hm47" (UID: "96faeeac-4463-44c8-95a9-9f9bc62676b9") : object "kube-system"/"coredns" not registered
	Dec 12 01:24:05 test-preload-209368 kubelet[1197]: I1212 01:24:05.381709    1197 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 12 01:24:05 test-preload-209368 kubelet[1197]: E1212 01:24:05.861628    1197 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 01:24:05 test-preload-209368 kubelet[1197]: E1212 01:24:05.861729    1197 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/96faeeac-4463-44c8-95a9-9f9bc62676b9-config-volume podName:96faeeac-4463-44c8-95a9-9f9bc62676b9 nodeName:}" failed. No retries permitted until 2025-12-12 01:24:07.861713883 +0000 UTC m=+9.747444700 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/96faeeac-4463-44c8-95a9-9f9bc62676b9-config-volume") pod "coredns-66bc5c9577-9hm47" (UID: "96faeeac-4463-44c8-95a9-9f9bc62676b9") : object "kube-system"/"coredns" not registered
	Dec 12 01:24:08 test-preload-209368 kubelet[1197]: E1212 01:24:08.322691    1197 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765502648318930356 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 12 01:24:08 test-preload-209368 kubelet[1197]: E1212 01:24:08.322717    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765502648318930356 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 12 01:24:10 test-preload-209368 kubelet[1197]: I1212 01:24:10.493250    1197 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 12 01:24:12 test-preload-209368 kubelet[1197]: I1212 01:24:12.094983    1197 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 12 01:24:18 test-preload-209368 kubelet[1197]: E1212 01:24:18.326326    1197 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765502658325310984 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 12 01:24:18 test-preload-209368 kubelet[1197]: E1212 01:24:18.326348    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765502658325310984 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [6407b3fa90fc4a53166a81676b14746f165e5ecf8d7526716f1b7b5529c4d96b] <==
	I1212 01:24:04.834753       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-209368 -n test-preload-209368
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-209368 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-209368" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-209368
--- FAIL: TestPreload (160.61s)

TestPause/serial/SecondStartNoReconfiguration (75.24s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-321955 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-321955 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.600693095s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-321955] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-321955" primary control-plane node in "pause-321955" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-321955" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1212 01:30:40.549241  233235 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:30:40.549419  233235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:30:40.549428  233235 out.go:374] Setting ErrFile to fd 2...
	I1212 01:30:40.549437  233235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:30:40.549829  233235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 01:30:40.550391  233235 out.go:368] Setting JSON to false
	I1212 01:30:40.551733  233235 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25985,"bootTime":1765477056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 01:30:40.551856  233235 start.go:143] virtualization: kvm guest
	I1212 01:30:40.657087  233235 out.go:179] * [pause-321955] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 01:30:40.691127  233235 notify.go:221] Checking for updates...
	I1212 01:30:40.691257  233235 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:30:40.696545  233235 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:30:40.698569  233235 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 01:30:40.701743  233235 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1212 01:30:40.704007  233235 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 01:30:40.705633  233235 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:30:40.707776  233235 config.go:182] Loaded profile config "pause-321955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:30:40.708520  233235 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:30:40.768725  233235 out.go:179] * Using the kvm2 driver based on existing profile
	I1212 01:30:40.839688  233235 start.go:309] selected driver: kvm2
	I1212 01:30:40.839726  233235 start.go:927] validating driver "kvm2" against &{Name:pause-321955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.2 ClusterName:pause-321955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:30:40.839987  233235 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:30:40.841503  233235 cni.go:84] Creating CNI manager for ""
	I1212 01:30:40.841593  233235 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:30:40.841656  233235 start.go:353] cluster config:
	{Name:pause-321955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-321955 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:30:40.841877  233235 iso.go:125] acquiring lock: {Name:mkc8bf4754eb4f0261bb252fe2c8bf1a2bf2967f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:30:40.880658  233235 out.go:179] * Starting "pause-321955" primary control-plane node in "pause-321955" cluster
	I1212 01:30:40.893613  233235 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 01:30:40.893725  233235 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 01:30:40.893752  233235 cache.go:65] Caching tarball of preloaded images
	I1212 01:30:40.893909  233235 preload.go:238] Found /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 01:30:40.893930  233235 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 01:30:40.894180  233235 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/config.json ...
	I1212 01:30:40.894524  233235 start.go:360] acquireMachinesLock for pause-321955: {Name:mk7557506c78bc6cb73692cb48d3039f590aa12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:30:56.831069  233235 start.go:364] duration metric: took 15.936474938s to acquireMachinesLock for "pause-321955"
	I1212 01:30:56.831145  233235 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:30:56.831155  233235 fix.go:54] fixHost starting: 
	I1212 01:30:56.834007  233235 fix.go:112] recreateIfNeeded on pause-321955: state=Running err=<nil>
	W1212 01:30:56.834064  233235 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:30:56.836367  233235 out.go:252] * Updating the running kvm2 "pause-321955" VM ...
	I1212 01:30:56.836412  233235 machine.go:94] provisionDockerMachine start ...
	I1212 01:30:56.842139  233235 main.go:143] libmachine: domain pause-321955 has defined MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:56.842886  233235 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5c:2b:e9", ip: ""} in network mk-pause-321955: {Iface:virbr2 ExpiryTime:2025-12-12 02:29:30 +0000 UTC Type:0 Mac:52:54:00:5c:2b:e9 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:pause-321955 Clientid:01:52:54:00:5c:2b:e9}
	I1212 01:30:56.842937  233235 main.go:143] libmachine: domain pause-321955 has defined IP address 192.168.50.238 and MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:56.843443  233235 main.go:143] libmachine: Using SSH client type: native
	I1212 01:30:56.843823  233235 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I1212 01:30:56.843839  233235 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:30:56.980764  233235 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-321955
	
	I1212 01:30:56.980806  233235 buildroot.go:166] provisioning hostname "pause-321955"
	I1212 01:30:56.984861  233235 main.go:143] libmachine: domain pause-321955 has defined MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:56.985489  233235 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5c:2b:e9", ip: ""} in network mk-pause-321955: {Iface:virbr2 ExpiryTime:2025-12-12 02:29:30 +0000 UTC Type:0 Mac:52:54:00:5c:2b:e9 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:pause-321955 Clientid:01:52:54:00:5c:2b:e9}
	I1212 01:30:56.985521  233235 main.go:143] libmachine: domain pause-321955 has defined IP address 192.168.50.238 and MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:56.985819  233235 main.go:143] libmachine: Using SSH client type: native
	I1212 01:30:56.986106  233235 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I1212 01:30:56.986124  233235 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-321955 && echo "pause-321955" | sudo tee /etc/hostname
	I1212 01:30:57.128160  233235 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-321955
	
	I1212 01:30:57.132211  233235 main.go:143] libmachine: domain pause-321955 has defined MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:57.132718  233235 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5c:2b:e9", ip: ""} in network mk-pause-321955: {Iface:virbr2 ExpiryTime:2025-12-12 02:29:30 +0000 UTC Type:0 Mac:52:54:00:5c:2b:e9 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:pause-321955 Clientid:01:52:54:00:5c:2b:e9}
	I1212 01:30:57.132746  233235 main.go:143] libmachine: domain pause-321955 has defined IP address 192.168.50.238 and MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:57.132976  233235 main.go:143] libmachine: Using SSH client type: native
	I1212 01:30:57.133238  233235 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I1212 01:30:57.133254  233235 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-321955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-321955/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-321955' | sudo tee -a /etc/hosts; 
				fi
			fi
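The `/etc/hosts` guard shown in the SSH command above can be exercised standalone. A minimal sketch against a temp file (so no root is needed), assuming GNU sed; the pre-existing `old-name` entry is a made-up fixture, not from the log:

```shell
# Rework of the /etc/hosts guard above against a scratch file:
# if the hostname is absent, rewrite (or append) the 127.0.1.1 entry.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name=pause-321955
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # replace the existing 127.0.1.1 entry with the new hostname
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
grep '^127\.0\.1\.1' "$hosts"
```

Running it twice is idempotent: the outer `grep` guard skips the rewrite once the entry is present.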
	I1212 01:30:57.250032  233235 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:30:57.250065  233235 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22101-186349/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-186349/.minikube}
	I1212 01:30:57.250093  233235 buildroot.go:174] setting up certificates
	I1212 01:30:57.250109  233235 provision.go:84] configureAuth start
	I1212 01:30:57.253419  233235 main.go:143] libmachine: domain pause-321955 has defined MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:57.254107  233235 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5c:2b:e9", ip: ""} in network mk-pause-321955: {Iface:virbr2 ExpiryTime:2025-12-12 02:29:30 +0000 UTC Type:0 Mac:52:54:00:5c:2b:e9 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:pause-321955 Clientid:01:52:54:00:5c:2b:e9}
	I1212 01:30:57.254151  233235 main.go:143] libmachine: domain pause-321955 has defined IP address 192.168.50.238 and MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:57.257339  233235 main.go:143] libmachine: domain pause-321955 has defined MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:57.257856  233235 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5c:2b:e9", ip: ""} in network mk-pause-321955: {Iface:virbr2 ExpiryTime:2025-12-12 02:29:30 +0000 UTC Type:0 Mac:52:54:00:5c:2b:e9 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:pause-321955 Clientid:01:52:54:00:5c:2b:e9}
	I1212 01:30:57.257883  233235 main.go:143] libmachine: domain pause-321955 has defined IP address 192.168.50.238 and MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:57.258061  233235 provision.go:143] copyHostCerts
	I1212 01:30:57.258136  233235 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-186349/.minikube/key.pem, removing ...
	I1212 01:30:57.258151  233235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-186349/.minikube/key.pem
	I1212 01:30:57.258228  233235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/key.pem (1675 bytes)
	I1212 01:30:57.258349  233235 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-186349/.minikube/ca.pem, removing ...
	I1212 01:30:57.258362  233235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.pem
	I1212 01:30:57.258394  233235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/ca.pem (1082 bytes)
	I1212 01:30:57.258446  233235 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-186349/.minikube/cert.pem, removing ...
	I1212 01:30:57.258454  233235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-186349/.minikube/cert.pem
	I1212 01:30:57.258508  233235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-186349/.minikube/cert.pem (1123 bytes)
	I1212 01:30:57.258566  233235 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem org=jenkins.pause-321955 san=[127.0.0.1 192.168.50.238 localhost minikube pause-321955]
	I1212 01:30:57.430587  233235 provision.go:177] copyRemoteCerts
	I1212 01:30:57.430656  233235 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:30:57.433318  233235 main.go:143] libmachine: domain pause-321955 has defined MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:57.433753  233235 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5c:2b:e9", ip: ""} in network mk-pause-321955: {Iface:virbr2 ExpiryTime:2025-12-12 02:29:30 +0000 UTC Type:0 Mac:52:54:00:5c:2b:e9 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:pause-321955 Clientid:01:52:54:00:5c:2b:e9}
	I1212 01:30:57.433791  233235 main.go:143] libmachine: domain pause-321955 has defined IP address 192.168.50.238 and MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:57.433969  233235 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/pause-321955/id_rsa Username:docker}
	I1212 01:30:57.535271  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:30:57.578830  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 01:30:57.618729  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:30:57.663943  233235 provision.go:87] duration metric: took 413.802653ms to configureAuth
	I1212 01:30:57.663973  233235 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:30:57.664241  233235 config.go:182] Loaded profile config "pause-321955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:30:57.667898  233235 main.go:143] libmachine: domain pause-321955 has defined MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:57.668333  233235 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5c:2b:e9", ip: ""} in network mk-pause-321955: {Iface:virbr2 ExpiryTime:2025-12-12 02:29:30 +0000 UTC Type:0 Mac:52:54:00:5c:2b:e9 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:pause-321955 Clientid:01:52:54:00:5c:2b:e9}
	I1212 01:30:57.668363  233235 main.go:143] libmachine: domain pause-321955 has defined IP address 192.168.50.238 and MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:30:57.668633  233235 main.go:143] libmachine: Using SSH client type: native
	I1212 01:30:57.668875  233235 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I1212 01:30:57.668901  233235 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:31:03.380567  233235 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:31:03.380606  233235 machine.go:97] duration metric: took 6.544183047s to provisionDockerMachine
	I1212 01:31:03.380622  233235 start.go:293] postStartSetup for "pause-321955" (driver="kvm2")
	I1212 01:31:03.380637  233235 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:31:03.380733  233235 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:31:03.385674  233235 main.go:143] libmachine: domain pause-321955 has defined MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:31:03.386316  233235 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5c:2b:e9", ip: ""} in network mk-pause-321955: {Iface:virbr2 ExpiryTime:2025-12-12 02:29:30 +0000 UTC Type:0 Mac:52:54:00:5c:2b:e9 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:pause-321955 Clientid:01:52:54:00:5c:2b:e9}
	I1212 01:31:03.386360  233235 main.go:143] libmachine: domain pause-321955 has defined IP address 192.168.50.238 and MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:31:03.386655  233235 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/pause-321955/id_rsa Username:docker}
	I1212 01:31:03.481256  233235 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:31:03.488550  233235 info.go:137] Remote host: Buildroot 2025.02
	I1212 01:31:03.488586  233235 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-186349/.minikube/addons for local assets ...
	I1212 01:31:03.488679  233235 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-186349/.minikube/files for local assets ...
	I1212 01:31:03.488815  233235 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-186349/.minikube/files/etc/ssl/certs/1902722.pem -> 1902722.pem in /etc/ssl/certs
	I1212 01:31:03.488970  233235 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:31:03.508204  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/files/etc/ssl/certs/1902722.pem --> /etc/ssl/certs/1902722.pem (1708 bytes)
	I1212 01:31:03.547791  233235 start.go:296] duration metric: took 167.145872ms for postStartSetup
	I1212 01:31:03.547849  233235 fix.go:56] duration metric: took 6.716695217s for fixHost
	I1212 01:31:03.551568  233235 main.go:143] libmachine: domain pause-321955 has defined MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:31:03.552121  233235 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5c:2b:e9", ip: ""} in network mk-pause-321955: {Iface:virbr2 ExpiryTime:2025-12-12 02:29:30 +0000 UTC Type:0 Mac:52:54:00:5c:2b:e9 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:pause-321955 Clientid:01:52:54:00:5c:2b:e9}
	I1212 01:31:03.552162  233235 main.go:143] libmachine: domain pause-321955 has defined IP address 192.168.50.238 and MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:31:03.552359  233235 main.go:143] libmachine: Using SSH client type: native
	I1212 01:31:03.552733  233235 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I1212 01:31:03.552759  233235 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:31:03.670776  233235 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765503063.665054475
	
	I1212 01:31:03.670809  233235 fix.go:216] guest clock: 1765503063.665054475
	I1212 01:31:03.670818  233235 fix.go:229] Guest: 2025-12-12 01:31:03.665054475 +0000 UTC Remote: 2025-12-12 01:31:03.547855298 +0000 UTC m=+23.065784974 (delta=117.199177ms)
	I1212 01:31:03.670838  233235 fix.go:200] guest clock delta is within tolerance: 117.199177ms
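The drift check above compares the guest's `date +%s.%N` output against the host wall clock. An editor-added sketch of that arithmetic using the two timestamps from the log lines (the 1-second tolerance is an assumption, not taken from minikube):

```shell
# Guest vs. remote timestamps copied from the log lines above.
guest=1765503063.665054475
remote=1765503063.547855298
# Absolute delta in seconds, via awk floating-point arithmetic.
delta=$(awk -v g="$guest" -v r="$remote" 'BEGIN { d = g - r; if (d < 0) d = -d; printf "%.6f", d }')
echo "delta=${delta}s"
# Hypothetical 1s tolerance check.
awk -v d="$delta" 'BEGIN { exit !(d < 1.0) }' && echo "within tolerance"
```

The delta comes out near 117.2ms, matching the `delta=117.199177ms` the log reports.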
	I1212 01:31:03.670845  233235 start.go:83] releasing machines lock for "pause-321955", held for 6.83972707s
	I1212 01:31:03.677069  233235 main.go:143] libmachine: domain pause-321955 has defined MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:31:03.678033  233235 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5c:2b:e9", ip: ""} in network mk-pause-321955: {Iface:virbr2 ExpiryTime:2025-12-12 02:29:30 +0000 UTC Type:0 Mac:52:54:00:5c:2b:e9 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:pause-321955 Clientid:01:52:54:00:5c:2b:e9}
	I1212 01:31:03.678072  233235 main.go:143] libmachine: domain pause-321955 has defined IP address 192.168.50.238 and MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:31:03.679013  233235 ssh_runner.go:195] Run: cat /version.json
	I1212 01:31:03.679269  233235 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:31:03.684126  233235 main.go:143] libmachine: domain pause-321955 has defined MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:31:03.684742  233235 main.go:143] libmachine: domain pause-321955 has defined MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:31:03.684809  233235 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5c:2b:e9", ip: ""} in network mk-pause-321955: {Iface:virbr2 ExpiryTime:2025-12-12 02:29:30 +0000 UTC Type:0 Mac:52:54:00:5c:2b:e9 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:pause-321955 Clientid:01:52:54:00:5c:2b:e9}
	I1212 01:31:03.684847  233235 main.go:143] libmachine: domain pause-321955 has defined IP address 192.168.50.238 and MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:31:03.685128  233235 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/pause-321955/id_rsa Username:docker}
	I1212 01:31:03.686033  233235 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5c:2b:e9", ip: ""} in network mk-pause-321955: {Iface:virbr2 ExpiryTime:2025-12-12 02:29:30 +0000 UTC Type:0 Mac:52:54:00:5c:2b:e9 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:pause-321955 Clientid:01:52:54:00:5c:2b:e9}
	I1212 01:31:03.687536  233235 main.go:143] libmachine: domain pause-321955 has defined IP address 192.168.50.238 and MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:31:03.687831  233235 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/pause-321955/id_rsa Username:docker}
	I1212 01:31:03.806148  233235 ssh_runner.go:195] Run: systemctl --version
	I1212 01:31:03.817054  233235 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:31:04.018545  233235 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:31:04.031504  233235 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:31:04.031681  233235 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:31:04.056080  233235 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 01:31:04.056242  233235 start.go:496] detecting cgroup driver to use...
	I1212 01:31:04.056521  233235 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:31:04.091218  233235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:31:04.116122  233235 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:31:04.116215  233235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:31:04.144630  233235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:31:04.166747  233235 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:31:04.411833  233235 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:31:04.624667  233235 docker.go:234] disabling docker service ...
	I1212 01:31:04.624752  233235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:31:04.674031  233235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:31:04.694274  233235 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:31:04.979435  233235 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:31:05.163865  233235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:31:05.185836  233235 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:31:05.218451  233235 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 01:31:05.218552  233235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:31:05.235283  233235 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:31:05.235389  233235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:31:05.254205  233235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:31:05.272097  233235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:31:05.288529  233235 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:31:05.308337  233235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:31:05.325940  233235 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:31:05.343834  233235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
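The sequence of `sed` rewrites above edits `02-crio.conf` in place over SSH. A standalone sketch of the pause-image, cgroup-manager, and conmon-cgroup steps against a scratch copy (the starting file contents are an assumed fixture, not taken from the VM):

```shell
# Scratch stand-in for /etc/crio/crio.conf.d/02-crio.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF
# Same substitutions the log runs, minus sudo:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
# Drop any existing conmon_cgroup line, then append one after cgroup_manager.
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
cat "$conf"
```

The delete-then-append pair mirrors why the log runs two separate commands: `a` alone would duplicate `conmon_cgroup` if the key already existed.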
	I1212 01:31:05.358739  233235 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:31:05.372231  233235 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:31:05.386801  233235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:31:05.570294  233235 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:31:06.066799  233235 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:31:06.066899  233235 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:31:06.073438  233235 start.go:564] Will wait 60s for crictl version
	I1212 01:31:06.073532  233235 ssh_runner.go:195] Run: which crictl
	I1212 01:31:06.078644  233235 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:31:06.114878  233235 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:31:06.114995  233235 ssh_runner.go:195] Run: crio --version
	I1212 01:31:06.147200  233235 ssh_runner.go:195] Run: crio --version
	I1212 01:31:06.188231  233235 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1212 01:31:06.192783  233235 main.go:143] libmachine: domain pause-321955 has defined MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:31:06.193281  233235 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5c:2b:e9", ip: ""} in network mk-pause-321955: {Iface:virbr2 ExpiryTime:2025-12-12 02:29:30 +0000 UTC Type:0 Mac:52:54:00:5c:2b:e9 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:pause-321955 Clientid:01:52:54:00:5c:2b:e9}
	I1212 01:31:06.193323  233235 main.go:143] libmachine: domain pause-321955 has defined IP address 192.168.50.238 and MAC address 52:54:00:5c:2b:e9 in network mk-pause-321955
	I1212 01:31:06.193620  233235 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 01:31:06.199893  233235 kubeadm.go:884] updating cluster {Name:pause-321955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-321955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:31:06.200109  233235 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 01:31:06.200169  233235 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:31:06.245913  233235 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:31:06.245948  233235 crio.go:433] Images already preloaded, skipping extraction
	I1212 01:31:06.246016  233235 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:31:06.286705  233235 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:31:06.286738  233235 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:31:06.286754  233235 kubeadm.go:935] updating node { 192.168.50.238 8443 v1.34.2 crio true true} ...
	I1212 01:31:06.286896  233235 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-321955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-321955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:31:06.286996  233235 ssh_runner.go:195] Run: crio config
	I1212 01:31:06.416550  233235 cni.go:84] Creating CNI manager for ""
	I1212 01:31:06.416587  233235 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:31:06.416610  233235 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:31:06.416644  233235 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.238 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-321955 NodeName:pause-321955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:31:06.416883  233235 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-321955"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.238"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.238"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:31:06.417009  233235 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 01:31:06.449374  233235 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:31:06.449484  233235 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:31:06.481907  233235 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1212 01:31:06.533808  233235 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:31:06.598871  233235 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1212 01:31:06.712319  233235 ssh_runner.go:195] Run: grep 192.168.50.238	control-plane.minikube.internal$ /etc/hosts
	I1212 01:31:06.719063  233235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:31:07.040610  233235 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:31:07.087916  233235 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955 for IP: 192.168.50.238
	I1212 01:31:07.087949  233235 certs.go:195] generating shared ca certs ...
	I1212 01:31:07.087972  233235 certs.go:227] acquiring lock for ca certs: {Name:mkdc58adfd2cc299a76aeec81ac0d7f7d2a38e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:31:07.088222  233235 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key
	I1212 01:31:07.088301  233235 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key
	I1212 01:31:07.088318  233235 certs.go:257] generating profile certs ...
	I1212 01:31:07.088494  233235 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/client.key
	I1212 01:31:07.088581  233235 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/apiserver.key.0e385ef6
	I1212 01:31:07.088636  233235 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/proxy-client.key
	I1212 01:31:07.088826  233235 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/190272.pem (1338 bytes)
	W1212 01:31:07.088878  233235 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-186349/.minikube/certs/190272_empty.pem, impossibly tiny 0 bytes
	I1212 01:31:07.088896  233235 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:31:07.088955  233235 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:31:07.088994  233235 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:31:07.089039  233235 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/certs/key.pem (1675 bytes)
	I1212 01:31:07.089114  233235 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-186349/.minikube/files/etc/ssl/certs/1902722.pem (1708 bytes)
	I1212 01:31:07.090181  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:31:07.161528  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:31:07.254389  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:31:07.358208  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 01:31:07.457770  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 01:31:07.556634  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:31:07.674846  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:31:07.779408  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:31:07.863769  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/files/etc/ssl/certs/1902722.pem --> /usr/share/ca-certificates/1902722.pem (1708 bytes)
	I1212 01:31:08.003703  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:31:08.082434  233235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-186349/.minikube/certs/190272.pem --> /usr/share/ca-certificates/190272.pem (1338 bytes)
	I1212 01:31:08.152970  233235 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:31:08.209370  233235 ssh_runner.go:195] Run: openssl version
	I1212 01:31:08.224360  233235 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/190272.pem
	I1212 01:31:08.247807  233235 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/190272.pem /etc/ssl/certs/190272.pem
	I1212 01:31:08.267813  233235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/190272.pem
	I1212 01:31:08.276346  233235 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:27 /usr/share/ca-certificates/190272.pem
	I1212 01:31:08.276453  233235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/190272.pem
	I1212 01:31:08.287330  233235 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:31:08.306700  233235 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1902722.pem
	I1212 01:31:08.329550  233235 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1902722.pem /etc/ssl/certs/1902722.pem
	I1212 01:31:08.350800  233235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1902722.pem
	I1212 01:31:08.362239  233235 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:27 /usr/share/ca-certificates/1902722.pem
	I1212 01:31:08.362311  233235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1902722.pem
	I1212 01:31:08.380180  233235 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:31:08.411933  233235 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:31:08.449523  233235 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:31:08.494871  233235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:31:08.518901  233235 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:31:08.518990  233235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:31:08.533256  233235 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:31:08.591581  233235 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:31:08.614911  233235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:31:08.645551  233235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:31:08.691183  233235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:31:08.746964  233235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:31:08.768375  233235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:31:08.800451  233235 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 01:31:08.834456  233235 kubeadm.go:401] StartCluster: {Name:pause-321955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-321955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:31:08.834652  233235 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:31:08.834745  233235 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:31:08.941906  233235 cri.go:89] found id: "8f7ad41c54d44f160219d76f61163f0f6d1322bcd79a44bed28f98aff104caf7"
	I1212 01:31:08.941938  233235 cri.go:89] found id: "3b248ea8d9057ca54057c104df56d67fe1519490246c3a53aab533b082de1155"
	I1212 01:31:08.941945  233235 cri.go:89] found id: "1d3e750bb16101a2c6c1e1e26da2a42e150277735ca96d7e011941714c7a1c7f"
	I1212 01:31:08.941951  233235 cri.go:89] found id: "3564edce8f817af98da30719082054a0c6a1f399e691b48cc243640a05b7e9ba"
	I1212 01:31:08.941956  233235 cri.go:89] found id: "f32777ea8fb9a99aeca7f459cfa91aad8fbe50fbee86e0a26bc6b98135e414a0"
	I1212 01:31:08.941962  233235 cri.go:89] found id: "489b91a0c7a86cec49f1bb28236b4989845d0c1044f8b3b67c20182d9b335274"
	I1212 01:31:08.941967  233235 cri.go:89] found id: "f8c79aa2095ac5971a2b851fa1b37236bf60b123ece7ca88aae412f3895c797f"
	I1212 01:31:08.941971  233235 cri.go:89] found id: "1e56080bfaddf23ab16011014dca4df160d90791159e94b822182e164f40affb"
	I1212 01:31:08.941976  233235 cri.go:89] found id: "ba55c031833a6c8196df5e34921cf978d1f8c90d8aeb4e6e55f3f33c99431814"
	I1212 01:31:08.941984  233235 cri.go:89] found id: "07eab4bdc509a78f7e43282272dc4578c699663161f110e7efe069796e78d08d"
	I1212 01:31:08.941988  233235 cri.go:89] found id: "2caaed4c2f76a5531c88754a898549561e957c76113542642509a9f7f135366c"
	I1212 01:31:08.941993  233235 cri.go:89] found id: "f2e3754c0f1b8ca5ef475fc070431714f8390011ee1788cd232a09eba471ce90"
	I1212 01:31:08.941999  233235 cri.go:89] found id: ""
	I1212 01:31:08.942065  233235 ssh_runner.go:195] Run: sudo runc list -f json

** /stderr **
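The stderr block above shows minikube's certificate wiring: each CA in /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and symlinked into /etc/ssl/certs as `<hash>.0` (e.g. `51391683.0`, `b5213941.0`), and each cluster cert is then probed with `openssl x509 -checkend 86400`, which exits 0 only if the cert remains valid for at least the next 24 hours. A minimal sketch reproducing those two checks on a throwaway self-signed cert (the `demo` names and temp paths are illustrative, not from the log):

```shell
set -eu
tmp=$(mktemp -d)

# Generate a throwaway self-signed cert valid for 10 days.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$tmp/demo.key" -out "$tmp/demo.crt" -days 10 2>/dev/null

# Subject hash: this is the name the /etc/ssl/certs/<hash>.0 symlink would use.
hash=$(openssl x509 -hash -noout -in "$tmp/demo.crt")
echo "symlink name: ${hash}.0"

# Exit status 0 means the cert will NOT expire within 86400 seconds (24h),
# which is exactly the freshness check minikube runs before reusing certs.
if openssl x509 -noout -in "$tmp/demo.crt" -checkend 86400 >/dev/null; then
  echo "valid-beyond-24h"
fi

rm -rf "$tmp"
```

When `-checkend` fails (exit 1), minikube regenerates the affected cert instead of reusing it, which is why the log lines above only appear for certs that passed.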
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-321955 -n pause-321955
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-321955 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-321955 logs -n 25: (1.519983487s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                        ARGS                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-028084 sudo cat /etc/docker/daemon.json                                                                   │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo docker system info                                                                            │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo systemctl status cri-docker --all --full --no-pager                                           │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo systemctl cat cri-docker --no-pager                                                           │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                      │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo cat /usr/lib/systemd/system/cri-docker.service                                                │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo cri-dockerd --version                                                                         │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo systemctl status containerd --all --full --no-pager                                           │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo systemctl cat containerd --no-pager                                                           │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo cat /lib/systemd/system/containerd.service                                                    │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo cat /etc/containerd/config.toml                                                               │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo containerd config dump                                                                        │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo systemctl status crio --all --full --no-pager                                                 │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo systemctl cat crio --no-pager                                                                 │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                       │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo crio config                                                                                   │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ delete  │ -p cilium-028084                                                                                                    │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │ 12 Dec 25 01:30 UTC │
	│ start   │ -p cert-expiration-809349 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                │ cert-expiration-809349 │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │ 12 Dec 25 01:31 UTC │
	│ start   │ -p pause-321955 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                      │ pause-321955           │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │ 12 Dec 25 01:31 UTC │
	│ start   │ -p NoKubernetes-985362 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio │ NoKubernetes-985362    │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │ 12 Dec 25 01:31 UTC │
	│ delete  │ -p NoKubernetes-985362                                                                                              │ NoKubernetes-985362    │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │ 12 Dec 25 01:31 UTC │
	│ start   │ -p NoKubernetes-985362 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio │ NoKubernetes-985362    │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │ 12 Dec 25 01:31 UTC │
	│ ssh     │ -p NoKubernetes-985362 sudo systemctl is-active --quiet service kubelet                                             │ NoKubernetes-985362    │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │                     │
	│ stop    │ -p NoKubernetes-985362                                                                                              │ NoKubernetes-985362    │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │ 12 Dec 25 01:31 UTC │
	│ start   │ -p NoKubernetes-985362 --driver=kvm2  --container-runtime=crio                                                      │ NoKubernetes-985362    │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:31:41
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:31:41.818155  233926 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:31:41.818544  233926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:31:41.818551  233926 out.go:374] Setting ErrFile to fd 2...
	I1212 01:31:41.818556  233926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:31:41.818897  233926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 01:31:41.819611  233926 out.go:368] Setting JSON to false
	I1212 01:31:41.821041  233926 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":26046,"bootTime":1765477056,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 01:31:41.821115  233926 start.go:143] virtualization: kvm guest
	I1212 01:31:41.822920  233926 out.go:179] * [NoKubernetes-985362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 01:31:41.824537  233926 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:31:41.824553  233926 notify.go:221] Checking for updates...
	I1212 01:31:41.828796  233926 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:31:41.830082  233926 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 01:31:41.831953  233926 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1212 01:31:41.833644  233926 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 01:31:41.835166  233926 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:31:41.837327  233926 config.go:182] Loaded profile config "NoKubernetes-985362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1212 01:31:41.838091  233926 start.go:1806] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1212 01:31:41.838123  233926 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:31:41.880844  233926 out.go:179] * Using the kvm2 driver based on existing profile
	I1212 01:31:41.882184  233926 start.go:309] selected driver: kvm2
	I1212 01:31:41.882194  233926 start.go:927] validating driver "kvm2" against &{Name:NoKubernetes-985362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v0.0.0 ClusterName:NoKubernetes-985362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.62 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:31:41.882320  233926 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:31:41.883323  233926 cni.go:84] Creating CNI manager for ""
	I1212 01:31:41.883376  233926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:31:41.883415  233926 start.go:353] cluster config:
	{Name:NoKubernetes-985362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-985362 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.62 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:31:41.883622  233926 iso.go:125] acquiring lock: {Name:mkc8bf4754eb4f0261bb252fe2c8bf1a2bf2967f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:31:41.885759  233926 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-985362
	I1212 01:31:41.887090  233926 preload.go:188] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W1212 01:31:41.911634  233926 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1212 01:31:42.044641  233926 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1212 01:31:42.044811  233926 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/NoKubernetes-985362/config.json ...
	I1212 01:31:42.045129  233926 start.go:360] acquireMachinesLock for NoKubernetes-985362: {Name:mk7557506c78bc6cb73692cb48d3039f590aa12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:31:42.045195  233926 start.go:364] duration metric: took 46.936µs to acquireMachinesLock for "NoKubernetes-985362"
	I1212 01:31:42.045211  233926 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:31:42.045217  233926 fix.go:54] fixHost starting: 
	I1212 01:31:42.047577  233926 fix.go:112] recreateIfNeeded on NoKubernetes-985362: state=Stopped err=<nil>
	W1212 01:31:42.047600  233926 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:31:38.387986  228989 logs.go:123] Gathering logs for kubelet ...
	I1212 01:31:38.388019  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:31:38.502237  228989 logs.go:123] Gathering logs for kube-scheduler [08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d] ...
	I1212 01:31:38.502285  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d"
	I1212 01:31:41.101483  228989 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1212 01:31:41.102291  228989 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1212 01:31:41.102365  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:31:41.102423  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:31:41.146347  228989 cri.go:89] found id: "976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35"
	I1212 01:31:41.146375  228989 cri.go:89] found id: ""
	I1212 01:31:41.146386  228989 logs.go:282] 1 containers: [976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35]
	I1212 01:31:41.146450  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.151231  228989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:31:41.151301  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:31:41.197579  228989 cri.go:89] found id: "ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd"
	I1212 01:31:41.197610  228989 cri.go:89] found id: ""
	I1212 01:31:41.197621  228989 logs.go:282] 1 containers: [ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd]
	I1212 01:31:41.197697  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.202529  228989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:31:41.202615  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:31:41.246730  228989 cri.go:89] found id: "63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6"
	I1212 01:31:41.246759  228989 cri.go:89] found id: ""
	I1212 01:31:41.246767  228989 logs.go:282] 1 containers: [63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6]
	I1212 01:31:41.246823  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.251723  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:31:41.251812  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:31:41.297993  228989 cri.go:89] found id: "08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d"
	I1212 01:31:41.298019  228989 cri.go:89] found id: "ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90"
	I1212 01:31:41.298026  228989 cri.go:89] found id: ""
	I1212 01:31:41.298037  228989 logs.go:282] 2 containers: [08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90]
	I1212 01:31:41.298107  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.304325  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.310621  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:31:41.310710  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:31:41.361133  228989 cri.go:89] found id: "d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3"
	I1212 01:31:41.361166  228989 cri.go:89] found id: ""
	I1212 01:31:41.361177  228989 logs.go:282] 1 containers: [d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3]
	I1212 01:31:41.361238  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.366204  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:31:41.366272  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:31:41.408421  228989 cri.go:89] found id: "084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72"
	I1212 01:31:41.408450  228989 cri.go:89] found id: ""
	I1212 01:31:41.408473  228989 logs.go:282] 1 containers: [084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72]
	I1212 01:31:41.408531  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.415361  228989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:31:41.415430  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:31:41.469627  228989 cri.go:89] found id: ""
	I1212 01:31:41.469652  228989 logs.go:282] 0 containers: []
	W1212 01:31:41.469659  228989 logs.go:284] No container was found matching "kindnet"
	I1212 01:31:41.469666  228989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:31:41.469716  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:31:41.525627  228989 cri.go:89] found id: "275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75"
	I1212 01:31:41.525662  228989 cri.go:89] found id: ""
	I1212 01:31:41.525675  228989 logs.go:282] 1 containers: [275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75]
	I1212 01:31:41.525746  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.531922  228989 logs.go:123] Gathering logs for kubelet ...
	I1212 01:31:41.531952  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:31:41.676750  228989 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:31:41.676800  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:31:41.773296  228989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:31:41.773323  228989 logs.go:123] Gathering logs for etcd [ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd] ...
	I1212 01:31:41.773341  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd"
	I1212 01:31:41.831636  228989 logs.go:123] Gathering logs for coredns [63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6] ...
	I1212 01:31:41.831688  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6"
	I1212 01:31:41.887852  228989 logs.go:123] Gathering logs for kube-scheduler [ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90] ...
	I1212 01:31:41.887878  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90"
	I1212 01:31:41.931401  228989 logs.go:123] Gathering logs for kube-proxy [d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3] ...
	I1212 01:31:41.931433  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3"
	I1212 01:31:41.978985  228989 logs.go:123] Gathering logs for kube-controller-manager [084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72] ...
	I1212 01:31:41.979027  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72"
	I1212 01:31:42.026024  228989 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:31:42.026063  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:31:42.337568  228989 logs.go:123] Gathering logs for dmesg ...
	I1212 01:31:42.337607  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:31:42.353715  228989 logs.go:123] Gathering logs for kube-apiserver [976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35] ...
	I1212 01:31:42.353756  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35"
	I1212 01:31:42.409284  228989 logs.go:123] Gathering logs for kube-scheduler [08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d] ...
	I1212 01:31:42.409325  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d"
	I1212 01:31:42.510149  228989 logs.go:123] Gathering logs for storage-provisioner [275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75] ...
	I1212 01:31:42.510196  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75"
	I1212 01:31:42.552418  228989 logs.go:123] Gathering logs for container status ...
	I1212 01:31:42.552449  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:31:40.794709  233235 pod_ready.go:104] pod "coredns-66bc5c9577-sqrnk" is not "Ready", error: <nil>
	W1212 01:31:43.293063  233235 pod_ready.go:104] pod "coredns-66bc5c9577-sqrnk" is not "Ready", error: <nil>
	W1212 01:31:45.297176  233235 pod_ready.go:104] pod "coredns-66bc5c9577-sqrnk" is not "Ready", error: <nil>
	I1212 01:31:42.049301  233926 out.go:252] * Restarting existing kvm2 VM for "NoKubernetes-985362" ...
	I1212 01:31:42.049401  233926 main.go:143] libmachine: starting domain...
	I1212 01:31:42.049412  233926 main.go:143] libmachine: ensuring networks are active...
	I1212 01:31:42.050402  233926 main.go:143] libmachine: Ensuring network default is active
	I1212 01:31:42.050902  233926 main.go:143] libmachine: Ensuring network mk-NoKubernetes-985362 is active
	I1212 01:31:42.051652  233926 main.go:143] libmachine: getting domain XML...
	I1212 01:31:42.053345  233926 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>NoKubernetes-985362</name>
	  <uuid>7dfee705-b682-4112-9b3f-4734bea2cfb8</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/NoKubernetes-985362/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/NoKubernetes-985362/NoKubernetes-985362.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f0:39:d3'/>
	      <source network='mk-NoKubernetes-985362'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:29:b6:28'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1212 01:31:43.490556  233926 main.go:143] libmachine: waiting for domain to start...
	I1212 01:31:43.492261  233926 main.go:143] libmachine: domain is now running
	I1212 01:31:43.492278  233926 main.go:143] libmachine: waiting for IP...
	I1212 01:31:43.493381  233926 main.go:143] libmachine: domain NoKubernetes-985362 has defined MAC address 52:54:00:f0:39:d3 in network mk-NoKubernetes-985362
	I1212 01:31:43.494490  233926 main.go:143] libmachine: domain NoKubernetes-985362 has current primary IP address 192.168.83.62 and MAC address 52:54:00:f0:39:d3 in network mk-NoKubernetes-985362
	I1212 01:31:43.494510  233926 main.go:143] libmachine: found domain IP: 192.168.83.62
	I1212 01:31:43.494523  233926 main.go:143] libmachine: reserving static IP address...
	I1212 01:31:43.495335  233926 main.go:143] libmachine: unable to find host DHCP lease matching {name: "NoKubernetes-985362", mac: "52:54:00:f0:39:d3", ip: "192.168.83.62"} in network mk-NoKubernetes-985362
	I1212 01:31:43.758863  233926 main.go:143] libmachine: failed reserving static IP address 192.168.83.62 for domain NoKubernetes-985362, will continue anyway: virError(Code=55, Domain=19, Message='Requested operation is not valid: there is an existing dhcp host entry in network 'mk-NoKubernetes-985362' that matches "<host mac='52:54:00:f0:39:d3' name='NoKubernetes-985362' ip='192.168.83.62'/>"')
	I1212 01:31:43.758876  233926 main.go:143] libmachine: waiting for SSH...
	I1212 01:31:43.758902  233926 main.go:143] libmachine: Getting to WaitForSSH function...
	I1212 01:31:43.762720  233926 main.go:143] libmachine: domain NoKubernetes-985362 has defined MAC address 52:54:00:f0:39:d3 in network mk-NoKubernetes-985362
	I1212 01:31:43.763379  233926 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:39:d3", ip: ""} in network mk-NoKubernetes-985362: {Iface:virbr5 ExpiryTime:2025-12-12 02:31:31 +0000 UTC Type:0 Mac:52:54:00:f0:39:d3 Iaid: IPaddr:192.168.83.62 Prefix:24 Hostname:nokubernetes-985362 Clientid:01:52:54:00:f0:39:d3}
	I1212 01:31:43.763398  233926 main.go:143] libmachine: domain NoKubernetes-985362 has defined IP address 192.168.83.62 and MAC address 52:54:00:f0:39:d3 in network mk-NoKubernetes-985362
	I1212 01:31:43.763666  233926 main.go:143] libmachine: Using SSH client type: native
	I1212 01:31:43.763973  233926 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.62 22 <nil> <nil>}
	I1212 01:31:43.763979  233926 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1212 01:31:46.814114  233926 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.83.62:22: connect: no route to host
	I1212 01:31:45.117049  228989 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1212 01:31:45.117815  228989 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1212 01:31:45.117885  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:31:45.117961  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:31:45.163842  228989 cri.go:89] found id: "976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35"
	I1212 01:31:45.163868  228989 cri.go:89] found id: ""
	I1212 01:31:45.163878  228989 logs.go:282] 1 containers: [976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35]
	I1212 01:31:45.163964  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.169293  228989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:31:45.169358  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:31:45.216181  228989 cri.go:89] found id: "ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd"
	I1212 01:31:45.216213  228989 cri.go:89] found id: ""
	I1212 01:31:45.216223  228989 logs.go:282] 1 containers: [ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd]
	I1212 01:31:45.216275  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.220907  228989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:31:45.221026  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:31:45.269730  228989 cri.go:89] found id: "63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6"
	I1212 01:31:45.269761  228989 cri.go:89] found id: ""
	I1212 01:31:45.269774  228989 logs.go:282] 1 containers: [63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6]
	I1212 01:31:45.269851  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.274603  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:31:45.274678  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:31:45.328457  228989 cri.go:89] found id: "08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d"
	I1212 01:31:45.328501  228989 cri.go:89] found id: "ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90"
	I1212 01:31:45.328508  228989 cri.go:89] found id: ""
	I1212 01:31:45.328519  228989 logs.go:282] 2 containers: [08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90]
	I1212 01:31:45.328598  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.334675  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.339898  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:31:45.340001  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:31:45.390328  228989 cri.go:89] found id: "d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3"
	I1212 01:31:45.390351  228989 cri.go:89] found id: ""
	I1212 01:31:45.390358  228989 logs.go:282] 1 containers: [d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3]
	I1212 01:31:45.390415  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.395747  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:31:45.395826  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:31:45.438363  228989 cri.go:89] found id: "084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72"
	I1212 01:31:45.438396  228989 cri.go:89] found id: ""
	I1212 01:31:45.438408  228989 logs.go:282] 1 containers: [084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72]
	I1212 01:31:45.438490  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.443853  228989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:31:45.443922  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:31:45.491426  228989 cri.go:89] found id: ""
	I1212 01:31:45.491453  228989 logs.go:282] 0 containers: []
	W1212 01:31:45.491478  228989 logs.go:284] No container was found matching "kindnet"
	I1212 01:31:45.491485  228989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:31:45.491539  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:31:45.539702  228989 cri.go:89] found id: "275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75"
	I1212 01:31:45.539739  228989 cri.go:89] found id: ""
	I1212 01:31:45.539752  228989 logs.go:282] 1 containers: [275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75]
	I1212 01:31:45.539825  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.546329  228989 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:31:45.546365  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:31:45.638481  228989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:31:45.638511  228989 logs.go:123] Gathering logs for kube-apiserver [976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35] ...
	I1212 01:31:45.638529  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35"
	I1212 01:31:45.686260  228989 logs.go:123] Gathering logs for coredns [63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6] ...
	I1212 01:31:45.686307  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6"
	I1212 01:31:45.731121  228989 logs.go:123] Gathering logs for kube-scheduler [08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d] ...
	I1212 01:31:45.731159  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d"
	I1212 01:31:45.829352  228989 logs.go:123] Gathering logs for kube-scheduler [ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90] ...
	I1212 01:31:45.829393  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90"
	I1212 01:31:45.884536  228989 logs.go:123] Gathering logs for container status ...
	I1212 01:31:45.884568  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:31:45.937650  228989 logs.go:123] Gathering logs for kubelet ...
	I1212 01:31:45.937685  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:31:46.054832  228989 logs.go:123] Gathering logs for etcd [ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd] ...
	I1212 01:31:46.054887  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd"
	I1212 01:31:46.108533  228989 logs.go:123] Gathering logs for kube-proxy [d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3] ...
	I1212 01:31:46.108586  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3"
	I1212 01:31:46.154012  228989 logs.go:123] Gathering logs for kube-controller-manager [084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72] ...
	I1212 01:31:46.154056  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72"
	I1212 01:31:46.205333  228989 logs.go:123] Gathering logs for storage-provisioner [275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75] ...
	I1212 01:31:46.205380  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75"
	I1212 01:31:46.253891  228989 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:31:46.253937  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:31:46.587101  228989 logs.go:123] Gathering logs for dmesg ...
	I1212 01:31:46.587167  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:31:45.794900  233235 pod_ready.go:94] pod "coredns-66bc5c9577-sqrnk" is "Ready"
	I1212 01:31:45.794940  233235 pod_ready.go:86] duration metric: took 7.007987203s for pod "coredns-66bc5c9577-sqrnk" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:45.798539  233235 pod_ready.go:83] waiting for pod "etcd-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 01:31:47.805840  233235 pod_ready.go:104] pod "etcd-pause-321955" is not "Ready", error: <nil>
	I1212 01:31:49.809497  233235 pod_ready.go:94] pod "etcd-pause-321955" is "Ready"
	I1212 01:31:49.809543  233235 pod_ready.go:86] duration metric: took 4.01097435s for pod "etcd-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:49.812546  233235 pod_ready.go:83] waiting for pod "kube-apiserver-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:49.818450  233235 pod_ready.go:94] pod "kube-apiserver-pause-321955" is "Ready"
	I1212 01:31:49.818497  233235 pod_ready.go:86] duration metric: took 5.920481ms for pod "kube-apiserver-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:49.821089  233235 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:50.329381  233235 pod_ready.go:94] pod "kube-controller-manager-pause-321955" is "Ready"
	I1212 01:31:50.329414  233235 pod_ready.go:86] duration metric: took 508.297977ms for pod "kube-controller-manager-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:50.333665  233235 pod_ready.go:83] waiting for pod "kube-proxy-c7jlm" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:50.404498  233235 pod_ready.go:94] pod "kube-proxy-c7jlm" is "Ready"
	I1212 01:31:50.404538  233235 pod_ready.go:86] duration metric: took 70.83106ms for pod "kube-proxy-c7jlm" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:50.603668  233235 pod_ready.go:83] waiting for pod "kube-scheduler-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:51.004374  233235 pod_ready.go:94] pod "kube-scheduler-pause-321955" is "Ready"
	I1212 01:31:51.004412  233235 pod_ready.go:86] duration metric: took 400.708643ms for pod "kube-scheduler-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:51.004427  233235 pod_ready.go:40] duration metric: took 12.223358587s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 01:31:51.055793  233235 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 01:31:51.057804  233235 out.go:179] * Done! kubectl is now configured to use "pause-321955" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.846737767Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e8b513c3811f0ccc320fd8c70e65bff071664bf98de187d3c0c8dedc0acfd7cb,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d5bbf5a1-744e-4ba2-9298-f6731eaab4a9 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.848246976Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e8b513c3811f0ccc320fd8c70e65bff071664bf98de187d3c0c8dedc0acfd7cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1765503092506297566,StartedAt:1765503092721040118,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.34.2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/1191a1102d721fa486dd34bac5e36c61/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/1191a1102d721fa486dd34bac5e36c61/containers/kube-scheduler/2c002f6f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-pause-321955_1191a1102d721fa486dd34bac5e36c61/kube-scheduler/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d5bbf5a1-744e-4ba2-9298-f6731eaab4a9 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.850765423Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:cbeb9a40d9bc0cae5f1aa3d47f98c3396f33696bb38292f53c134c7a98616d2c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=b0787b91-9621-4bfe-8cfe-62feb84c9ef2 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.850890390Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:cbeb9a40d9bc0cae5f1aa3d47f98c3396f33696bb38292f53c134c7a98616d2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1765503092473028756,StartedAt:1765503092599088525,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.34.2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/832b52968e66e650f8022e1d87ef2287/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/832b52968e66e650f8022e1d87ef2287/containers/kube-apiserver/3a9cf0ca,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-pause-321955_832b52968e66e650f8022e1d87ef2287/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=b0787b91-9621-4bfe-8cfe-62feb84c9ef2 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.854106512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a26032e5-5f88-46d6-8252-2606a582ceba name=/runtime.v1.RuntimeService/Version
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.854357781Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a26032e5-5f88-46d6-8252-2606a582ceba name=/runtime.v1.RuntimeService/Version
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.856311951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06823a92-890a-4e1e-9478-3da6f117ca65 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.856797961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765503111856769528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06823a92-890a-4e1e-9478-3da6f117ca65 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.857889384Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5b9ac20-f5e5-4fb4-9faf-2f1863297c72 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.858011709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5b9ac20-f5e5-4fb4-9faf-2f1863297c72 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.858354282Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e806b0688e58f21d2419f66d408b82e2f3b54a230ee726cb4b3e1e51136340b,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765503097019602717,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e19dfd2c1741aec5773da9ab3700bdf853400aba90aaacb12eb3d653a975a6a,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765503096997737615,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7c15d67f78dfcc7da5a2390448706f08211115b08e31010a6a56c30b6900b1,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765503092381751707,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b829c5fe6f50c00fb1078b4319fb5f3738f1c22376b2262a5506588743ec92f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNN
ING,CreatedAt:1765503092395850304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8b513c3811f0ccc320fd8c70e65bff071664bf98de187d3c0c8dedc0acfd7cb,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765503092353803447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb9a40d9bc0cae5f1aa3d47f98c3396f33696bb38292f53c134c7a98616d2c,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a
5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765503092364116878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7ad41c54d44f160219d76f61163f0f6d1322bcd79a44bed28f98aff104caf7,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76
c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765503068540728773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b248ea8d9057ca54057c104df56d67fe1519490246c3a53aab533b082de1155,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765503067232578008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3e750bb16101a2c6c1e1e26da2a42e150277735ca96d7e011941714c7a1c7f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765503067144093051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"
name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489b91a0c7a86cec49f1bb28236b4989845d0c1044f8b3b67c20182d9b335274,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765503067001565027,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},
Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3564edce8f817af98da30719082054a0c6a1f399e691b48cc243640a05b7e9ba,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765503067113425363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-32
1955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32777ea8fb9a99aeca7f459cfa91aad8fbe50fbee86e0a26bc6b98135e414a0,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765503067072395362,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5b9ac20-f5e5-4fb4-9faf-2f1863297c72 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.913229124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d296026-7981-47da-8f4d-3c7952c13769 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.913307131Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d296026-7981-47da-8f4d-3c7952c13769 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.914777906Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1483c49d-c054-47cb-9d5b-abe8c0b7f646 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.915667272Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765503111915633640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1483c49d-c054-47cb-9d5b-abe8c0b7f646 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.917142406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=917b82c9-cefa-4264-82ca-3bf5e5ac3455 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.917288511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=917b82c9-cefa-4264-82ca-3bf5e5ac3455 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.917550711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e806b0688e58f21d2419f66d408b82e2f3b54a230ee726cb4b3e1e51136340b,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765503097019602717,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e19dfd2c1741aec5773da9ab3700bdf853400aba90aaacb12eb3d653a975a6a,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765503096997737615,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7c15d67f78dfcc7da5a2390448706f08211115b08e31010a6a56c30b6900b1,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765503092381751707,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b829c5fe6f50c00fb1078b4319fb5f3738f1c22376b2262a5506588743ec92f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNN
ING,CreatedAt:1765503092395850304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8b513c3811f0ccc320fd8c70e65bff071664bf98de187d3c0c8dedc0acfd7cb,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765503092353803447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb9a40d9bc0cae5f1aa3d47f98c3396f33696bb38292f53c134c7a98616d2c,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a
5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765503092364116878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7ad41c54d44f160219d76f61163f0f6d1322bcd79a44bed28f98aff104caf7,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76
c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765503068540728773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b248ea8d9057ca54057c104df56d67fe1519490246c3a53aab533b082de1155,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765503067232578008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3e750bb16101a2c6c1e1e26da2a42e150277735ca96d7e011941714c7a1c7f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765503067144093051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"
name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489b91a0c7a86cec49f1bb28236b4989845d0c1044f8b3b67c20182d9b335274,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765503067001565027,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},
Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3564edce8f817af98da30719082054a0c6a1f399e691b48cc243640a05b7e9ba,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765503067113425363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-32
1955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32777ea8fb9a99aeca7f459cfa91aad8fbe50fbee86e0a26bc6b98135e414a0,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765503067072395362,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=917b82c9-cefa-4264-82ca-3bf5e5ac3455 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.970165597Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc94c1db-76c0-454a-b6c1-0c235d723b54 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.970320821Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc94c1db-76c0-454a-b6c1-0c235d723b54 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.972133441Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67e68e1f-cb83-467e-87c0-9b7c675d2f5d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.972578580Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765503111972553559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67e68e1f-cb83-467e-87c0-9b7c675d2f5d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.973809381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=879658f4-c6e6-4149-b50e-2b50fba7056b name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.973870181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=879658f4-c6e6-4149-b50e-2b50fba7056b name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:51 pause-321955 crio[2831]: time="2025-12-12 01:31:51.974126369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e806b0688e58f21d2419f66d408b82e2f3b54a230ee726cb4b3e1e51136340b,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765503097019602717,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e19dfd2c1741aec5773da9ab3700bdf853400aba90aaacb12eb3d653a975a6a,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765503096997737615,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7c15d67f78dfcc7da5a2390448706f08211115b08e31010a6a56c30b6900b1,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765503092381751707,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b829c5fe6f50c00fb1078b4319fb5f3738f1c22376b2262a5506588743ec92f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNN
ING,CreatedAt:1765503092395850304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8b513c3811f0ccc320fd8c70e65bff071664bf98de187d3c0c8dedc0acfd7cb,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765503092353803447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb9a40d9bc0cae5f1aa3d47f98c3396f33696bb38292f53c134c7a98616d2c,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a
5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765503092364116878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7ad41c54d44f160219d76f61163f0f6d1322bcd79a44bed28f98aff104caf7,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76
c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765503068540728773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b248ea8d9057ca54057c104df56d67fe1519490246c3a53aab533b082de1155,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765503067232578008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3e750bb16101a2c6c1e1e26da2a42e150277735ca96d7e011941714c7a1c7f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765503067144093051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"
name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489b91a0c7a86cec49f1bb28236b4989845d0c1044f8b3b67c20182d9b335274,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765503067001565027,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},
Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3564edce8f817af98da30719082054a0c6a1f399e691b48cc243640a05b7e9ba,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765503067113425363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-32
1955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32777ea8fb9a99aeca7f459cfa91aad8fbe50fbee86e0a26bc6b98135e414a0,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765503067072395362,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=879658f4-c6e6-4149-b50e-2b50fba7056b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	6e806b0688e58       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago      Running             coredns                   2                   22c34b2ac2cb1       coredns-66bc5c9577-sqrnk               kube-system
	2e19dfd2c1741       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   15 seconds ago      Running             kube-proxy                2                   9ec27a491136c       kube-proxy-c7jlm                       kube-system
	8b829c5fe6f50       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   19 seconds ago      Running             kube-controller-manager   2                   562eac2695c2f       kube-controller-manager-pause-321955   kube-system
	dc7c15d67f78d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   19 seconds ago      Running             etcd                      2                   b13008a0d6477       etcd-pause-321955                      kube-system
	cbeb9a40d9bc0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   19 seconds ago      Running             kube-apiserver            2                   3d604f5347a21       kube-apiserver-pause-321955            kube-system
	e8b513c3811f0       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   19 seconds ago      Running             kube-scheduler            2                   7d7a8a4964cab       kube-scheduler-pause-321955            kube-system
	8f7ad41c54d44       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   43 seconds ago      Exited              coredns                   1                   22c34b2ac2cb1       coredns-66bc5c9577-sqrnk               kube-system
	3b248ea8d9057       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   44 seconds ago      Exited              kube-proxy                1                   9ec27a491136c       kube-proxy-c7jlm                       kube-system
	1d3e750bb1610       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   44 seconds ago      Exited              kube-controller-manager   1                   562eac2695c2f       kube-controller-manager-pause-321955   kube-system
	3564edce8f817       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   44 seconds ago      Exited              kube-apiserver            1                   3d604f5347a21       kube-apiserver-pause-321955            kube-system
	f32777ea8fb9a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   45 seconds ago      Exited              etcd                      1                   b13008a0d6477       etcd-pause-321955                      kube-system
	489b91a0c7a86       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   45 seconds ago      Exited              kube-scheduler            1                   7d7a8a4964cab       kube-scheduler-pause-321955            kube-system
	
	
	==> coredns [6e806b0688e58f21d2419f66d408b82e2f3b54a230ee726cb4b3e1e51136340b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43119 - 16896 "HINFO IN 7082807878144831796.2380034269300008608. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06559116s
	
	
	==> coredns [8f7ad41c54d44f160219d76f61163f0f6d1322bcd79a44bed28f98aff104caf7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:38337 - 62149 "HINFO IN 4711539473721116245.2655341612649581674. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045025462s
	
	
	==> describe nodes <==
	Name:               pause-321955
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-321955
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=pause-321955
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T01_29_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 01:29:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-321955
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 01:31:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 01:31:36 +0000   Fri, 12 Dec 2025 01:29:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 01:31:36 +0000   Fri, 12 Dec 2025 01:29:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 01:31:36 +0000   Fri, 12 Dec 2025 01:29:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 01:31:36 +0000   Fri, 12 Dec 2025 01:29:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.238
	  Hostname:    pause-321955
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a1525b17ba340a5a95384ccaa1b42ab
	  System UUID:                0a1525b1-7ba3-40a5-a953-84ccaa1b42ab
	  Boot ID:                    542e2c57-4039-4990-8f87-e42f342b3296
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-sqrnk                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     113s
	  kube-system                 etcd-pause-321955                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         119s
	  kube-system                 kube-apiserver-pause-321955             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-pause-321955    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-c7jlm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-pause-321955             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 39s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node pause-321955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node pause-321955 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m7s)  kubelet          Node pause-321955 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  119s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node pause-321955 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node pause-321955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node pause-321955 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                117s                 kubelet          Node pause-321955 status is now: NodeReady
	  Normal  RegisteredNode           114s                 node-controller  Node pause-321955 event: Registered Node pause-321955 in Controller
	  Normal  RegisteredNode           37s                  node-controller  Node pause-321955 event: Registered Node pause-321955 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20s (x8 over 21s)    kubelet          Node pause-321955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 21s)    kubelet          Node pause-321955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 21s)    kubelet          Node pause-321955 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s                  node-controller  Node pause-321955 event: Registered Node pause-321955 in Controller
	
	
	==> dmesg <==
	[Dec12 01:29] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000079] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007368] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.208293] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.100737] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.151218] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.677519] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.152617] kauditd_printk_skb: 143 callbacks suppressed
	[  +0.178302] kauditd_printk_skb: 18 callbacks suppressed
	[Dec12 01:30] kauditd_printk_skb: 219 callbacks suppressed
	[ +26.871128] kauditd_printk_skb: 38 callbacks suppressed
	[Dec12 01:31] kauditd_printk_skb: 297 callbacks suppressed
	[  +3.862906] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.760218] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.605964] kauditd_printk_skb: 81 callbacks suppressed
	[  +7.083963] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [dc7c15d67f78dfcc7da5a2390448706f08211115b08e31010a6a56c30b6900b1] <==
	{"level":"warn","ts":"2025-12-12T01:31:34.449564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.473321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.495313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.502455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.516307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.536126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.569902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.587400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.622074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.660409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.690152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.700518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.717478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.772587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.777046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.791519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.812281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.826607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.844586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.869449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.881088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.903642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.918807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.937240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:35.053928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42806","server-name":"","error":"EOF"}
	
	
	==> etcd [f32777ea8fb9a99aeca7f459cfa91aad8fbe50fbee86e0a26bc6b98135e414a0] <==
	{"level":"warn","ts":"2025-12-12T01:31:10.927547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:10.947658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:10.959057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:10.976088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:10.992714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:10.997948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:11.114529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34594","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T01:31:29.775140Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-12T01:31:29.775347Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-321955","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.238:2380"],"advertise-client-urls":["https://192.168.50.238:2379"]}
	{"level":"error","ts":"2025-12-12T01:31:29.775495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T01:31:29.777395Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T01:31:29.777495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:31:29.777655Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"943d0bcc43b450ee","current-leader-member-id":"943d0bcc43b450ee"}
	{"level":"info","ts":"2025-12-12T01:31:29.777695Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-12T01:31:29.777708Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-12T01:31:29.777752Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T01:31:29.777818Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T01:31:29.777825Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-12T01:31:29.777872Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.238:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T01:31:29.777885Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.238:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T01:31:29.777893Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.238:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:31:29.782911Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.238:2380"}
	{"level":"info","ts":"2025-12-12T01:31:29.783653Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.238:2380"}
	{"level":"error","ts":"2025-12-12T01:31:29.783457Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.238:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:31:29.783709Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-321955","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.238:2380"],"advertise-client-urls":["https://192.168.50.238:2379"]}
	
	
	==> kernel <==
	 01:31:52 up 2 min,  0 users,  load average: 1.46, 0.61, 0.23
	Linux pause-321955 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3564edce8f817af98da30719082054a0c6a1f399e691b48cc243640a05b7e9ba] <==
	I1212 01:31:19.720613       1 controller.go:120] Shutting down OpenAPI V3 controller
	I1212 01:31:19.720640       1 storage_flowcontrol.go:172] APF bootstrap ensurer is exiting
	I1212 01:31:19.720652       1 controller.go:132] Ending legacy_token_tracking_controller
	I1212 01:31:19.720658       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I1212 01:31:19.720679       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I1212 01:31:19.720694       1 customresource_discovery_controller.go:332] Shutting down DiscoveryController
	I1212 01:31:19.720709       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1212 01:31:19.720723       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I1212 01:31:19.720998       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1212 01:31:19.721010       1 controller.go:157] Shutting down quota evaluator
	I1212 01:31:19.721024       1 controller.go:176] quota evaluator worker shutdown
	I1212 01:31:19.720329       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 01:31:19.720370       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1212 01:31:19.721756       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1212 01:31:19.721809       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 01:31:19.721859       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I1212 01:31:19.721987       1 repairip.go:246] Shutting down ipallocator-repair-controller
	I1212 01:31:19.722545       1 controller.go:176] quota evaluator worker shutdown
	I1212 01:31:19.722565       1 controller.go:176] quota evaluator worker shutdown
	I1212 01:31:19.722573       1 controller.go:176] quota evaluator worker shutdown
	I1212 01:31:19.722579       1 controller.go:176] quota evaluator worker shutdown
	I1212 01:31:19.723064       1 secure_serving.go:259] Stopped listening on [::]:8443
	I1212 01:31:19.723086       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1212 01:31:19.723485       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1212 01:31:19.723944       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-apiserver [cbeb9a40d9bc0cae5f1aa3d47f98c3396f33696bb38292f53c134c7a98616d2c] <==
	I1212 01:31:36.056542       1 aggregator.go:171] initial CRD sync complete...
	I1212 01:31:36.056626       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 01:31:36.056659       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 01:31:36.056783       1 cache.go:39] Caches are synced for autoregister controller
	I1212 01:31:36.070518       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1212 01:31:36.070850       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 01:31:36.070886       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 01:31:36.074600       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 01:31:36.080960       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 01:31:36.082271       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 01:31:36.104144       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 01:31:36.104336       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 01:31:36.109591       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 01:31:36.109656       1 policy_source.go:240] refreshing policies
	I1212 01:31:36.194930       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 01:31:36.704480       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 01:31:36.880089       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1212 01:31:37.608593       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.238]
	I1212 01:31:37.610621       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 01:31:37.617858       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 01:31:38.159335       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 01:31:38.249409       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 01:31:38.305070       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 01:31:38.325955       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 01:31:45.560299       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1d3e750bb16101a2c6c1e1e26da2a42e150277735ca96d7e011941714c7a1c7f] <==
	I1212 01:31:15.401076       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 01:31:15.402591       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 01:31:15.405153       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1212 01:31:15.406767       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 01:31:15.409903       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 01:31:15.414331       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 01:31:15.415583       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 01:31:15.415633       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 01:31:15.418046       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 01:31:15.421864       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 01:31:15.421904       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 01:31:15.421914       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 01:31:15.426268       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 01:31:15.429486       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1212 01:31:15.437314       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 01:31:15.437481       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1212 01:31:15.437516       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 01:31:15.437566       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 01:31:15.437642       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1212 01:31:15.438815       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 01:31:15.438914       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 01:31:15.440478       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 01:31:15.450055       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1212 01:31:15.450271       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 01:31:15.454126       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-controller-manager [8b829c5fe6f50c00fb1078b4319fb5f3738f1c22376b2262a5506588743ec92f] <==
	I1212 01:31:39.524946       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1212 01:31:39.525669       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 01:31:39.525947       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 01:31:39.525961       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 01:31:39.525969       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1212 01:31:39.528878       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 01:31:39.530795       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 01:31:39.530901       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 01:31:39.530975       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-321955"
	I1212 01:31:39.531783       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 01:31:39.531978       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1212 01:31:39.534119       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 01:31:39.534524       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1212 01:31:39.540796       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 01:31:39.541398       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 01:31:39.541405       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 01:31:39.541654       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 01:31:39.545873       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 01:31:39.546024       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 01:31:39.548478       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 01:31:39.550856       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 01:31:39.560950       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 01:31:39.562557       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1212 01:31:39.563863       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 01:31:39.570591       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [2e19dfd2c1741aec5773da9ab3700bdf853400aba90aaacb12eb3d653a975a6a] <==
	I1212 01:31:37.393998       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 01:31:37.495778       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 01:31:37.495887       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.238"]
	E1212 01:31:37.495990       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:31:37.566661       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1212 01:31:37.566789       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 01:31:37.566851       1 server_linux.go:132] "Using iptables Proxier"
	I1212 01:31:37.581144       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:31:37.581672       1 server.go:527] "Version info" version="v1.34.2"
	I1212 01:31:37.581740       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:31:37.588155       1 config.go:200] "Starting service config controller"
	I1212 01:31:37.588169       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 01:31:37.588354       1 config.go:106] "Starting endpoint slice config controller"
	I1212 01:31:37.588373       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 01:31:37.588423       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 01:31:37.588439       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 01:31:37.588968       1 config.go:309] "Starting node config controller"
	I1212 01:31:37.589007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 01:31:37.589023       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 01:31:37.688405       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 01:31:37.688504       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 01:31:37.688508       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [3b248ea8d9057ca54057c104df56d67fe1519490246c3a53aab533b082de1155] <==
	I1212 01:31:10.518685       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 01:31:12.222142       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 01:31:12.222364       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.238"]
	E1212 01:31:12.222477       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:31:12.310397       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1212 01:31:12.310521       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 01:31:12.310571       1 server_linux.go:132] "Using iptables Proxier"
	I1212 01:31:12.339663       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:31:12.341168       1 server.go:527] "Version info" version="v1.34.2"
	I1212 01:31:12.341423       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:31:12.360927       1 config.go:200] "Starting service config controller"
	I1212 01:31:12.361074       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 01:31:12.361104       1 config.go:106] "Starting endpoint slice config controller"
	I1212 01:31:12.361110       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 01:31:12.361125       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 01:31:12.361130       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 01:31:12.372851       1 config.go:309] "Starting node config controller"
	I1212 01:31:12.372897       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 01:31:12.372907       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 01:31:12.461493       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 01:31:12.461529       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 01:31:12.461550       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [489b91a0c7a86cec49f1bb28236b4989845d0c1044f8b3b67c20182d9b335274] <==
	I1212 01:31:12.054798       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 01:31:12.106422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 01:31:12.107083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 01:31:12.107576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 01:31:12.107821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 01:31:12.108017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 01:31:12.110893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 01:31:12.111334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 01:31:12.115833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 01:31:12.115950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 01:31:12.116027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 01:31:12.117512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 01:31:12.117747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 01:31:12.117950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 01:31:12.118058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 01:31:12.118070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 01:31:12.120581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 01:31:12.120813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1212 01:31:12.155417       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:31:29.919908       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1212 01:31:29.920384       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1212 01:31:29.920496       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1212 01:31:29.920611       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:31:29.920727       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1212 01:31:29.920772       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e8b513c3811f0ccc320fd8c70e65bff071664bf98de187d3c0c8dedc0acfd7cb] <==
	I1212 01:31:33.840490       1 serving.go:386] Generated self-signed cert in-memory
	W1212 01:31:35.948572       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 01:31:35.948789       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 01:31:35.948822       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 01:31:35.948914       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 01:31:36.046819       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 01:31:36.047656       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:31:36.056160       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 01:31:36.057034       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:31:36.059627       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:31:36.058068       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 01:31:36.160628       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.084357    3975 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: E1212 01:31:36.132085    3975 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-321955\" already exists" pod="kube-system/kube-scheduler-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.132259    3975 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: E1212 01:31:36.142743    3975 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-321955\" already exists" pod="kube-system/etcd-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.142778    3975 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: E1212 01:31:36.158513    3975 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-321955\" already exists" pod="kube-system/kube-apiserver-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.158749    3975 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.169977    3975 kubelet_node_status.go:124] "Node was previously registered" node="pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.170101    3975 kubelet_node_status.go:78] "Successfully registered node" node="pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.170138    3975 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.173489    3975 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: E1212 01:31:36.174821    3975 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-321955\" already exists" pod="kube-system/kube-controller-manager-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.282123    3975 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: E1212 01:31:36.294896    3975 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-321955\" already exists" pod="kube-system/kube-apiserver-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.658665    3975 apiserver.go:52] "Watching apiserver"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.687557    3975 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.697149    3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/347a9cc8-e305-4635-be48-63cd11a20559-lib-modules\") pod \"kube-proxy-c7jlm\" (UID: \"347a9cc8-e305-4635-be48-63cd11a20559\") " pod="kube-system/kube-proxy-c7jlm"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.697396    3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/347a9cc8-e305-4635-be48-63cd11a20559-xtables-lock\") pod \"kube-proxy-c7jlm\" (UID: \"347a9cc8-e305-4635-be48-63cd11a20559\") " pod="kube-system/kube-proxy-c7jlm"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.963804    3975 scope.go:117] "RemoveContainer" containerID="8f7ad41c54d44f160219d76f61163f0f6d1322bcd79a44bed28f98aff104caf7"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.965756    3975 scope.go:117] "RemoveContainer" containerID="3b248ea8d9057ca54057c104df56d67fe1519490246c3a53aab533b082de1155"
	Dec 12 01:31:41 pause-321955 kubelet[3975]: E1212 01:31:41.831607    3975 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765503101830404612 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 12 01:31:41 pause-321955 kubelet[3975]: E1212 01:31:41.832171    3975 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765503101830404612 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 12 01:31:45 pause-321955 kubelet[3975]: I1212 01:31:45.506329    3975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 12 01:31:51 pause-321955 kubelet[3975]: E1212 01:31:51.836450    3975 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765503111833793146 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 12 01:31:51 pause-321955 kubelet[3975]: E1212 01:31:51.836473    3975 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765503111833793146 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-321955 -n pause-321955
helpers_test.go:270: (dbg) Run:  kubectl --context pause-321955 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-321955 -n pause-321955
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-321955 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-321955 logs -n 25: (1.744722096s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                        ARGS                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-028084 sudo cat /etc/docker/daemon.json                                                                   │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo docker system info                                                                            │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo systemctl status cri-docker --all --full --no-pager                                           │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo systemctl cat cri-docker --no-pager                                                           │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                      │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo cat /usr/lib/systemd/system/cri-docker.service                                                │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo cri-dockerd --version                                                                         │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo systemctl status containerd --all --full --no-pager                                           │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo systemctl cat containerd --no-pager                                                           │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo cat /lib/systemd/system/containerd.service                                                    │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo cat /etc/containerd/config.toml                                                               │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo containerd config dump                                                                        │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo systemctl status crio --all --full --no-pager                                                 │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo systemctl cat crio --no-pager                                                                 │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                       │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ ssh     │ -p cilium-028084 sudo crio config                                                                                   │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │                     │
	│ delete  │ -p cilium-028084                                                                                                    │ cilium-028084          │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │ 12 Dec 25 01:30 UTC │
	│ start   │ -p cert-expiration-809349 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                │ cert-expiration-809349 │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │ 12 Dec 25 01:31 UTC │
	│ start   │ -p pause-321955 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                      │ pause-321955           │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │ 12 Dec 25 01:31 UTC │
	│ start   │ -p NoKubernetes-985362 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio │ NoKubernetes-985362    │ jenkins │ v1.37.0 │ 12 Dec 25 01:30 UTC │ 12 Dec 25 01:31 UTC │
	│ delete  │ -p NoKubernetes-985362                                                                                              │ NoKubernetes-985362    │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │ 12 Dec 25 01:31 UTC │
	│ start   │ -p NoKubernetes-985362 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio │ NoKubernetes-985362    │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │ 12 Dec 25 01:31 UTC │
	│ ssh     │ -p NoKubernetes-985362 sudo systemctl is-active --quiet service kubelet                                             │ NoKubernetes-985362    │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │                     │
	│ stop    │ -p NoKubernetes-985362                                                                                              │ NoKubernetes-985362    │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │ 12 Dec 25 01:31 UTC │
	│ start   │ -p NoKubernetes-985362 --driver=kvm2  --container-runtime=crio                                                      │ NoKubernetes-985362    │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:31:41
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:31:41.818155  233926 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:31:41.818544  233926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:31:41.818551  233926 out.go:374] Setting ErrFile to fd 2...
	I1212 01:31:41.818556  233926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:31:41.818897  233926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 01:31:41.819611  233926 out.go:368] Setting JSON to false
	I1212 01:31:41.821041  233926 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":26046,"bootTime":1765477056,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 01:31:41.821115  233926 start.go:143] virtualization: kvm guest
	I1212 01:31:41.822920  233926 out.go:179] * [NoKubernetes-985362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 01:31:41.824537  233926 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:31:41.824553  233926 notify.go:221] Checking for updates...
	I1212 01:31:41.828796  233926 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:31:41.830082  233926 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 01:31:41.831953  233926 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1212 01:31:41.833644  233926 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 01:31:41.835166  233926 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:31:41.837327  233926 config.go:182] Loaded profile config "NoKubernetes-985362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1212 01:31:41.838091  233926 start.go:1806] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1212 01:31:41.838123  233926 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:31:41.880844  233926 out.go:179] * Using the kvm2 driver based on existing profile
	I1212 01:31:41.882184  233926 start.go:309] selected driver: kvm2
	I1212 01:31:41.882194  233926 start.go:927] validating driver "kvm2" against &{Name:NoKubernetes-985362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-985362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.62 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:31:41.882320  233926 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:31:41.883323  233926 cni.go:84] Creating CNI manager for ""
	I1212 01:31:41.883376  233926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:31:41.883415  233926 start.go:353] cluster config:
	{Name:NoKubernetes-985362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-985362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.62 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:31:41.883622  233926 iso.go:125] acquiring lock: {Name:mkc8bf4754eb4f0261bb252fe2c8bf1a2bf2967f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:31:41.885759  233926 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-985362
	I1212 01:31:41.887090  233926 preload.go:188] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W1212 01:31:41.911634  233926 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1212 01:31:42.044641  233926 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1212 01:31:42.044811  233926 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/NoKubernetes-985362/config.json ...
	I1212 01:31:42.045129  233926 start.go:360] acquireMachinesLock for NoKubernetes-985362: {Name:mk7557506c78bc6cb73692cb48d3039f590aa12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:31:42.045195  233926 start.go:364] duration metric: took 46.936µs to acquireMachinesLock for "NoKubernetes-985362"
	I1212 01:31:42.045211  233926 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:31:42.045217  233926 fix.go:54] fixHost starting: 
	I1212 01:31:42.047577  233926 fix.go:112] recreateIfNeeded on NoKubernetes-985362: state=Stopped err=<nil>
	W1212 01:31:42.047600  233926 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:31:38.387986  228989 logs.go:123] Gathering logs for kubelet ...
	I1212 01:31:38.388019  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:31:38.502237  228989 logs.go:123] Gathering logs for kube-scheduler [08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d] ...
	I1212 01:31:38.502285  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d"
	I1212 01:31:41.101483  228989 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1212 01:31:41.102291  228989 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1212 01:31:41.102365  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:31:41.102423  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:31:41.146347  228989 cri.go:89] found id: "976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35"
	I1212 01:31:41.146375  228989 cri.go:89] found id: ""
	I1212 01:31:41.146386  228989 logs.go:282] 1 containers: [976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35]
	I1212 01:31:41.146450  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.151231  228989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:31:41.151301  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:31:41.197579  228989 cri.go:89] found id: "ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd"
	I1212 01:31:41.197610  228989 cri.go:89] found id: ""
	I1212 01:31:41.197621  228989 logs.go:282] 1 containers: [ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd]
	I1212 01:31:41.197697  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.202529  228989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:31:41.202615  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:31:41.246730  228989 cri.go:89] found id: "63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6"
	I1212 01:31:41.246759  228989 cri.go:89] found id: ""
	I1212 01:31:41.246767  228989 logs.go:282] 1 containers: [63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6]
	I1212 01:31:41.246823  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.251723  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:31:41.251812  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:31:41.297993  228989 cri.go:89] found id: "08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d"
	I1212 01:31:41.298019  228989 cri.go:89] found id: "ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90"
	I1212 01:31:41.298026  228989 cri.go:89] found id: ""
	I1212 01:31:41.298037  228989 logs.go:282] 2 containers: [08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90]
	I1212 01:31:41.298107  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.304325  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.310621  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:31:41.310710  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:31:41.361133  228989 cri.go:89] found id: "d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3"
	I1212 01:31:41.361166  228989 cri.go:89] found id: ""
	I1212 01:31:41.361177  228989 logs.go:282] 1 containers: [d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3]
	I1212 01:31:41.361238  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.366204  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:31:41.366272  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:31:41.408421  228989 cri.go:89] found id: "084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72"
	I1212 01:31:41.408450  228989 cri.go:89] found id: ""
	I1212 01:31:41.408473  228989 logs.go:282] 1 containers: [084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72]
	I1212 01:31:41.408531  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.415361  228989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:31:41.415430  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:31:41.469627  228989 cri.go:89] found id: ""
	I1212 01:31:41.469652  228989 logs.go:282] 0 containers: []
	W1212 01:31:41.469659  228989 logs.go:284] No container was found matching "kindnet"
	I1212 01:31:41.469666  228989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:31:41.469716  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:31:41.525627  228989 cri.go:89] found id: "275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75"
	I1212 01:31:41.525662  228989 cri.go:89] found id: ""
	I1212 01:31:41.525675  228989 logs.go:282] 1 containers: [275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75]
	I1212 01:31:41.525746  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:41.531922  228989 logs.go:123] Gathering logs for kubelet ...
	I1212 01:31:41.531952  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:31:41.676750  228989 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:31:41.676800  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:31:41.773296  228989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:31:41.773323  228989 logs.go:123] Gathering logs for etcd [ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd] ...
	I1212 01:31:41.773341  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd"
	I1212 01:31:41.831636  228989 logs.go:123] Gathering logs for coredns [63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6] ...
	I1212 01:31:41.831688  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6"
	I1212 01:31:41.887852  228989 logs.go:123] Gathering logs for kube-scheduler [ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90] ...
	I1212 01:31:41.887878  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90"
	I1212 01:31:41.931401  228989 logs.go:123] Gathering logs for kube-proxy [d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3] ...
	I1212 01:31:41.931433  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3"
	I1212 01:31:41.978985  228989 logs.go:123] Gathering logs for kube-controller-manager [084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72] ...
	I1212 01:31:41.979027  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72"
	I1212 01:31:42.026024  228989 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:31:42.026063  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:31:42.337568  228989 logs.go:123] Gathering logs for dmesg ...
	I1212 01:31:42.337607  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:31:42.353715  228989 logs.go:123] Gathering logs for kube-apiserver [976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35] ...
	I1212 01:31:42.353756  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35"
	I1212 01:31:42.409284  228989 logs.go:123] Gathering logs for kube-scheduler [08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d] ...
	I1212 01:31:42.409325  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d"
	I1212 01:31:42.510149  228989 logs.go:123] Gathering logs for storage-provisioner [275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75] ...
	I1212 01:31:42.510196  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75"
	I1212 01:31:42.552418  228989 logs.go:123] Gathering logs for container status ...
	I1212 01:31:42.552449  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:31:40.794709  233235 pod_ready.go:104] pod "coredns-66bc5c9577-sqrnk" is not "Ready", error: <nil>
	W1212 01:31:43.293063  233235 pod_ready.go:104] pod "coredns-66bc5c9577-sqrnk" is not "Ready", error: <nil>
	W1212 01:31:45.297176  233235 pod_ready.go:104] pod "coredns-66bc5c9577-sqrnk" is not "Ready", error: <nil>
	I1212 01:31:42.049301  233926 out.go:252] * Restarting existing kvm2 VM for "NoKubernetes-985362" ...
	I1212 01:31:42.049401  233926 main.go:143] libmachine: starting domain...
	I1212 01:31:42.049412  233926 main.go:143] libmachine: ensuring networks are active...
	I1212 01:31:42.050402  233926 main.go:143] libmachine: Ensuring network default is active
	I1212 01:31:42.050902  233926 main.go:143] libmachine: Ensuring network mk-NoKubernetes-985362 is active
	I1212 01:31:42.051652  233926 main.go:143] libmachine: getting domain XML...
	I1212 01:31:42.053345  233926 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>NoKubernetes-985362</name>
	  <uuid>7dfee705-b682-4112-9b3f-4734bea2cfb8</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/NoKubernetes-985362/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22101-186349/.minikube/machines/NoKubernetes-985362/NoKubernetes-985362.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f0:39:d3'/>
	      <source network='mk-NoKubernetes-985362'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:29:b6:28'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1212 01:31:43.490556  233926 main.go:143] libmachine: waiting for domain to start...
	I1212 01:31:43.492261  233926 main.go:143] libmachine: domain is now running
	I1212 01:31:43.492278  233926 main.go:143] libmachine: waiting for IP...
	I1212 01:31:43.493381  233926 main.go:143] libmachine: domain NoKubernetes-985362 has defined MAC address 52:54:00:f0:39:d3 in network mk-NoKubernetes-985362
	I1212 01:31:43.494490  233926 main.go:143] libmachine: domain NoKubernetes-985362 has current primary IP address 192.168.83.62 and MAC address 52:54:00:f0:39:d3 in network mk-NoKubernetes-985362
	I1212 01:31:43.494510  233926 main.go:143] libmachine: found domain IP: 192.168.83.62
	I1212 01:31:43.494523  233926 main.go:143] libmachine: reserving static IP address...
	I1212 01:31:43.495335  233926 main.go:143] libmachine: unable to find host DHCP lease matching {name: "NoKubernetes-985362", mac: "52:54:00:f0:39:d3", ip: "192.168.83.62"} in network mk-NoKubernetes-985362
	I1212 01:31:43.758863  233926 main.go:143] libmachine: failed reserving static IP address 192.168.83.62 for domain NoKubernetes-985362, will continue anyway: virError(Code=55, Domain=19, Message='Requested operation is not valid: there is an existing dhcp host entry in network 'mk-NoKubernetes-985362' that matches "<host mac='52:54:00:f0:39:d3' name='NoKubernetes-985362' ip='192.168.83.62'/>"')
	I1212 01:31:43.758876  233926 main.go:143] libmachine: waiting for SSH...
	I1212 01:31:43.758902  233926 main.go:143] libmachine: Getting to WaitForSSH function...
	I1212 01:31:43.762720  233926 main.go:143] libmachine: domain NoKubernetes-985362 has defined MAC address 52:54:00:f0:39:d3 in network mk-NoKubernetes-985362
	I1212 01:31:43.763379  233926 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f0:39:d3", ip: ""} in network mk-NoKubernetes-985362: {Iface:virbr5 ExpiryTime:2025-12-12 02:31:31 +0000 UTC Type:0 Mac:52:54:00:f0:39:d3 Iaid: IPaddr:192.168.83.62 Prefix:24 Hostname:nokubernetes-985362 Clientid:01:52:54:00:f0:39:d3}
	I1212 01:31:43.763398  233926 main.go:143] libmachine: domain NoKubernetes-985362 has defined IP address 192.168.83.62 and MAC address 52:54:00:f0:39:d3 in network mk-NoKubernetes-985362
	I1212 01:31:43.763666  233926 main.go:143] libmachine: Using SSH client type: native
	I1212 01:31:43.763973  233926 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.62 22 <nil> <nil>}
	I1212 01:31:43.763979  233926 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1212 01:31:46.814114  233926 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.83.62:22: connect: no route to host
	I1212 01:31:45.117049  228989 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1212 01:31:45.117815  228989 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1212 01:31:45.117885  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:31:45.117961  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:31:45.163842  228989 cri.go:89] found id: "976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35"
	I1212 01:31:45.163868  228989 cri.go:89] found id: ""
	I1212 01:31:45.163878  228989 logs.go:282] 1 containers: [976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35]
	I1212 01:31:45.163964  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.169293  228989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:31:45.169358  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:31:45.216181  228989 cri.go:89] found id: "ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd"
	I1212 01:31:45.216213  228989 cri.go:89] found id: ""
	I1212 01:31:45.216223  228989 logs.go:282] 1 containers: [ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd]
	I1212 01:31:45.216275  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.220907  228989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:31:45.221026  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:31:45.269730  228989 cri.go:89] found id: "63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6"
	I1212 01:31:45.269761  228989 cri.go:89] found id: ""
	I1212 01:31:45.269774  228989 logs.go:282] 1 containers: [63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6]
	I1212 01:31:45.269851  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.274603  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:31:45.274678  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:31:45.328457  228989 cri.go:89] found id: "08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d"
	I1212 01:31:45.328501  228989 cri.go:89] found id: "ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90"
	I1212 01:31:45.328508  228989 cri.go:89] found id: ""
	I1212 01:31:45.328519  228989 logs.go:282] 2 containers: [08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90]
	I1212 01:31:45.328598  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.334675  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.339898  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:31:45.340001  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:31:45.390328  228989 cri.go:89] found id: "d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3"
	I1212 01:31:45.390351  228989 cri.go:89] found id: ""
	I1212 01:31:45.390358  228989 logs.go:282] 1 containers: [d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3]
	I1212 01:31:45.390415  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.395747  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:31:45.395826  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:31:45.438363  228989 cri.go:89] found id: "084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72"
	I1212 01:31:45.438396  228989 cri.go:89] found id: ""
	I1212 01:31:45.438408  228989 logs.go:282] 1 containers: [084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72]
	I1212 01:31:45.438490  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.443853  228989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:31:45.443922  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:31:45.491426  228989 cri.go:89] found id: ""
	I1212 01:31:45.491453  228989 logs.go:282] 0 containers: []
	W1212 01:31:45.491478  228989 logs.go:284] No container was found matching "kindnet"
	I1212 01:31:45.491485  228989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:31:45.491539  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:31:45.539702  228989 cri.go:89] found id: "275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75"
	I1212 01:31:45.539739  228989 cri.go:89] found id: ""
	I1212 01:31:45.539752  228989 logs.go:282] 1 containers: [275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75]
	I1212 01:31:45.539825  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:45.546329  228989 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:31:45.546365  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:31:45.638481  228989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:31:45.638511  228989 logs.go:123] Gathering logs for kube-apiserver [976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35] ...
	I1212 01:31:45.638529  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35"
	I1212 01:31:45.686260  228989 logs.go:123] Gathering logs for coredns [63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6] ...
	I1212 01:31:45.686307  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6"
	I1212 01:31:45.731121  228989 logs.go:123] Gathering logs for kube-scheduler [08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d] ...
	I1212 01:31:45.731159  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d"
	I1212 01:31:45.829352  228989 logs.go:123] Gathering logs for kube-scheduler [ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90] ...
	I1212 01:31:45.829393  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90"
	I1212 01:31:45.884536  228989 logs.go:123] Gathering logs for container status ...
	I1212 01:31:45.884568  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:31:45.937650  228989 logs.go:123] Gathering logs for kubelet ...
	I1212 01:31:45.937685  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:31:46.054832  228989 logs.go:123] Gathering logs for etcd [ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd] ...
	I1212 01:31:46.054887  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd"
	I1212 01:31:46.108533  228989 logs.go:123] Gathering logs for kube-proxy [d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3] ...
	I1212 01:31:46.108586  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3"
	I1212 01:31:46.154012  228989 logs.go:123] Gathering logs for kube-controller-manager [084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72] ...
	I1212 01:31:46.154056  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72"
	I1212 01:31:46.205333  228989 logs.go:123] Gathering logs for storage-provisioner [275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75] ...
	I1212 01:31:46.205380  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75"
	I1212 01:31:46.253891  228989 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:31:46.253937  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:31:46.587101  228989 logs.go:123] Gathering logs for dmesg ...
	I1212 01:31:46.587167  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:31:45.794900  233235 pod_ready.go:94] pod "coredns-66bc5c9577-sqrnk" is "Ready"
	I1212 01:31:45.794940  233235 pod_ready.go:86] duration metric: took 7.007987203s for pod "coredns-66bc5c9577-sqrnk" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:45.798539  233235 pod_ready.go:83] waiting for pod "etcd-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 01:31:47.805840  233235 pod_ready.go:104] pod "etcd-pause-321955" is not "Ready", error: <nil>
	I1212 01:31:49.809497  233235 pod_ready.go:94] pod "etcd-pause-321955" is "Ready"
	I1212 01:31:49.809543  233235 pod_ready.go:86] duration metric: took 4.01097435s for pod "etcd-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:49.812546  233235 pod_ready.go:83] waiting for pod "kube-apiserver-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:49.818450  233235 pod_ready.go:94] pod "kube-apiserver-pause-321955" is "Ready"
	I1212 01:31:49.818497  233235 pod_ready.go:86] duration metric: took 5.920481ms for pod "kube-apiserver-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:49.821089  233235 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:50.329381  233235 pod_ready.go:94] pod "kube-controller-manager-pause-321955" is "Ready"
	I1212 01:31:50.329414  233235 pod_ready.go:86] duration metric: took 508.297977ms for pod "kube-controller-manager-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:50.333665  233235 pod_ready.go:83] waiting for pod "kube-proxy-c7jlm" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:50.404498  233235 pod_ready.go:94] pod "kube-proxy-c7jlm" is "Ready"
	I1212 01:31:50.404538  233235 pod_ready.go:86] duration metric: took 70.83106ms for pod "kube-proxy-c7jlm" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:50.603668  233235 pod_ready.go:83] waiting for pod "kube-scheduler-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:51.004374  233235 pod_ready.go:94] pod "kube-scheduler-pause-321955" is "Ready"
	I1212 01:31:51.004412  233235 pod_ready.go:86] duration metric: took 400.708643ms for pod "kube-scheduler-pause-321955" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:31:51.004427  233235 pod_ready.go:40] duration metric: took 12.223358587s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 01:31:51.055793  233235 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 01:31:51.057804  233235 out.go:179] * Done! kubectl is now configured to use "pause-321955" cluster and "default" namespace by default
	I1212 01:31:49.106580  228989 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1212 01:31:49.107368  228989 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1212 01:31:49.107436  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:31:49.107532  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:31:49.154744  228989 cri.go:89] found id: "976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35"
	I1212 01:31:49.154775  228989 cri.go:89] found id: ""
	I1212 01:31:49.154787  228989 logs.go:282] 1 containers: [976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35]
	I1212 01:31:49.154869  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:49.159966  228989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:31:49.160036  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:31:49.208937  228989 cri.go:89] found id: "ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd"
	I1212 01:31:49.208973  228989 cri.go:89] found id: ""
	I1212 01:31:49.208984  228989 logs.go:282] 1 containers: [ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd]
	I1212 01:31:49.209053  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:49.214628  228989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:31:49.214715  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:31:49.265681  228989 cri.go:89] found id: "63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6"
	I1212 01:31:49.265717  228989 cri.go:89] found id: ""
	I1212 01:31:49.265729  228989 logs.go:282] 1 containers: [63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6]
	I1212 01:31:49.265801  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:49.271749  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:31:49.271855  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:31:49.317086  228989 cri.go:89] found id: "08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d"
	I1212 01:31:49.317117  228989 cri.go:89] found id: "ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90"
	I1212 01:31:49.317123  228989 cri.go:89] found id: ""
	I1212 01:31:49.317134  228989 logs.go:282] 2 containers: [08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90]
	I1212 01:31:49.317217  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:49.323743  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:49.328890  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:31:49.328990  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:31:49.390296  228989 cri.go:89] found id: "d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3"
	I1212 01:31:49.390328  228989 cri.go:89] found id: ""
	I1212 01:31:49.390340  228989 logs.go:282] 1 containers: [d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3]
	I1212 01:31:49.390395  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:49.395900  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:31:49.395988  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:31:49.441002  228989 cri.go:89] found id: "084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72"
	I1212 01:31:49.441031  228989 cri.go:89] found id: ""
	I1212 01:31:49.441042  228989 logs.go:282] 1 containers: [084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72]
	I1212 01:31:49.441096  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:49.447924  228989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:31:49.448017  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:31:49.490474  228989 cri.go:89] found id: ""
	I1212 01:31:49.490509  228989 logs.go:282] 0 containers: []
	W1212 01:31:49.490518  228989 logs.go:284] No container was found matching "kindnet"
	I1212 01:31:49.490524  228989 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:31:49.490588  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:31:49.537839  228989 cri.go:89] found id: "275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75"
	I1212 01:31:49.537866  228989 cri.go:89] found id: ""
	I1212 01:31:49.537877  228989 logs.go:282] 1 containers: [275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75]
	I1212 01:31:49.537948  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:49.543535  228989 logs.go:123] Gathering logs for coredns [63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6] ...
	I1212 01:31:49.543572  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6"
	I1212 01:31:49.591205  228989 logs.go:123] Gathering logs for kube-controller-manager [084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72] ...
	I1212 01:31:49.591241  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 084c8c1dff63f453a3753800ca7325184dc7fc91c3e179811f2929446933ab72"
	I1212 01:31:49.632746  228989 logs.go:123] Gathering logs for kubelet ...
	I1212 01:31:49.632780  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:31:49.730823  228989 logs.go:123] Gathering logs for dmesg ...
	I1212 01:31:49.730869  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:31:49.747644  228989 logs.go:123] Gathering logs for kube-scheduler [08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d] ...
	I1212 01:31:49.747683  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08bc0a480257c2ed170605f6c54ff927e4ab4f9a89a248e219b6d0dea234c13d"
	I1212 01:31:49.840586  228989 logs.go:123] Gathering logs for kube-scheduler [ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90] ...
	I1212 01:31:49.840636  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea922cf1eff990b8f9f49f36fc880f2775aa2b42cea0b0bd28111ee379078c90"
	I1212 01:31:49.889984  228989 logs.go:123] Gathering logs for kube-proxy [d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3] ...
	I1212 01:31:49.890031  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3f6454d6451822a34811149b8659d1f00171a0f0c25c4d6a20346fe63e2f1b3"
	I1212 01:31:49.938808  228989 logs.go:123] Gathering logs for storage-provisioner [275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75] ...
	I1212 01:31:49.938851  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 275a1fe81f36676345ea7210e714a8f543225bafc771e6987a6c0a51e5578c75"
	I1212 01:31:49.991030  228989 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:31:49.991063  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:31:50.326228  228989 logs.go:123] Gathering logs for container status ...
	I1212 01:31:50.326269  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:31:50.400250  228989 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:31:50.400293  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:31:50.499484  228989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:31:50.499516  228989 logs.go:123] Gathering logs for kube-apiserver [976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35] ...
	I1212 01:31:50.499533  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35"
	I1212 01:31:50.543015  228989 logs.go:123] Gathering logs for etcd [ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd] ...
	I1212 01:31:50.543057  228989 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd"
	I1212 01:31:53.096551  228989 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1212 01:31:53.097492  228989 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1212 01:31:53.097577  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:31:53.097651  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:31:53.160499  228989 cri.go:89] found id: "976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35"
	I1212 01:31:53.160532  228989 cri.go:89] found id: ""
	I1212 01:31:53.160545  228989 logs.go:282] 1 containers: [976a72e0be9e46769b6f830044d9ab2648b35a167e1cef0e9686dde921211e35]
	I1212 01:31:53.160618  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:53.166055  228989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:31:53.166141  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:31:53.222126  228989 cri.go:89] found id: "ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd"
	I1212 01:31:53.222160  228989 cri.go:89] found id: ""
	I1212 01:31:53.222172  228989 logs.go:282] 1 containers: [ed8b0c1d1a736dd495f0a7dc55172dfb90514ca93c04c162bf67b7ab9f990cdd]
	I1212 01:31:53.222241  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:53.229318  228989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:31:53.229407  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:31:53.284199  228989 cri.go:89] found id: "63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6"
	I1212 01:31:53.284240  228989 cri.go:89] found id: ""
	I1212 01:31:53.284251  228989 logs.go:282] 1 containers: [63ed4f4a6321e75e14042cc57d507dcd46ddd11b2569502320a3eaa3ae9201d6]
	I1212 01:31:53.284309  228989 ssh_runner.go:195] Run: which crictl
	I1212 01:31:53.291046  228989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:31:53.291144  228989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	
	
	==> CRI-O <==
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.062250162Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765503114062160519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b5c3c0c-2658-4dbb-a9d9-a359a9fe13e5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.064783732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbbcbef0-4983-4ff2-94b4-9cdf9349dcd8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.065054143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbbcbef0-4983-4ff2-94b4-9cdf9349dcd8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.065753621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e806b0688e58f21d2419f66d408b82e2f3b54a230ee726cb4b3e1e51136340b,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765503097019602717,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e19dfd2c1741aec5773da9ab3700bdf853400aba90aaacb12eb3d653a975a6a,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765503096997737615,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7c15d67f78dfcc7da5a2390448706f08211115b08e31010a6a56c30b6900b1,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765503092381751707,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b829c5fe6f50c00fb1078b4319fb5f3738f1c22376b2262a5506588743ec92f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765503092395850304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8b513c3811f0ccc320fd8c70e65bff071664bf98de187d3c0c8dedc0acfd7cb,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765503092353803447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb9a40d9bc0cae5f1aa3d47f98c3396f33696bb38292f53c134c7a98616d2c,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765503092364116878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7ad41c54d44f160219d76f61163f0f6d1322bcd79a44bed28f98aff104caf7,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765503068540728773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b248ea8d9057ca54057c104df56d67fe1519490246c3a53aab533b082de1155,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765503067232578008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3e750bb16101a2c6c1e1e26da2a42e150277735ca96d7e011941714c7a1c7f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765503067144093051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489b91a0c7a86cec49f1bb28236b4989845d0c1044f8b3b67c20182d9b335274,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765503067001565027,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3564edce8f817af98da30719082054a0c6a1f399e691b48cc243640a05b7e9ba,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765503067113425363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32777ea8fb9a99aeca7f459cfa91aad8fbe50fbee86e0a26bc6b98135e414a0,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765503067072395362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbbcbef0-4983-4ff2-94b4-9cdf9349dcd8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.118367164Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70e6ccb4-5f03-4615-9576-4f1c0ba07ef0 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.118797725Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70e6ccb4-5f03-4615-9576-4f1c0ba07ef0 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.121426183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a083ad1-d0a0-44d0-a23d-78ea9db00f29 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.122059059Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765503114122035066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a083ad1-d0a0-44d0-a23d-78ea9db00f29 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.123291294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c8286c3-ab86-4c29-9e44-35ade205e266 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.123367856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c8286c3-ab86-4c29-9e44-35ade205e266 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.123722139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e806b0688e58f21d2419f66d408b82e2f3b54a230ee726cb4b3e1e51136340b,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765503097019602717,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e19dfd2c1741aec5773da9ab3700bdf853400aba90aaacb12eb3d653a975a6a,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765503096997737615,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7c15d67f78dfcc7da5a2390448706f08211115b08e31010a6a56c30b6900b1,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765503092381751707,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b829c5fe6f50c00fb1078b4319fb5f3738f1c22376b2262a5506588743ec92f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765503092395850304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8b513c3811f0ccc320fd8c70e65bff071664bf98de187d3c0c8dedc0acfd7cb,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765503092353803447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb9a40d9bc0cae5f1aa3d47f98c3396f33696bb38292f53c134c7a98616d2c,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765503092364116878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7ad41c54d44f160219d76f61163f0f6d1322bcd79a44bed28f98aff104caf7,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765503068540728773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b248ea8d9057ca54057c104df56d67fe1519490246c3a53aab533b082de1155,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765503067232578008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3e750bb16101a2c6c1e1e26da2a42e150277735ca96d7e011941714c7a1c7f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765503067144093051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489b91a0c7a86cec49f1bb28236b4989845d0c1044f8b3b67c20182d9b335274,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765503067001565027,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3564edce8f817af98da30719082054a0c6a1f399e691b48cc243640a05b7e9ba,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765503067113425363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32777ea8fb9a99aeca7f459cfa91aad8fbe50fbee86e0a26bc6b98135e414a0,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765503067072395362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c8286c3-ab86-4c29-9e44-35ade205e266 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.194318609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=228bab54-9477-4dff-8781-684bcec04d9a name=/runtime.v1.RuntimeService/Version
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.194404348Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=228bab54-9477-4dff-8781-684bcec04d9a name=/runtime.v1.RuntimeService/Version
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.197074970Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dbadb80c-3164-4eea-851d-1f129c2bb8b1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.197746827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765503114197718415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbadb80c-3164-4eea-851d-1f129c2bb8b1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.198994755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=229ee5e9-15ce-4bd3-a769-373031271b14 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.199056114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=229ee5e9-15ce-4bd3-a769-373031271b14 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.199372556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e806b0688e58f21d2419f66d408b82e2f3b54a230ee726cb4b3e1e51136340b,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765503097019602717,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e19dfd2c1741aec5773da9ab3700bdf853400aba90aaacb12eb3d653a975a6a,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765503096997737615,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7c15d67f78dfcc7da5a2390448706f08211115b08e31010a6a56c30b6900b1,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765503092381751707,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b829c5fe6f50c00fb1078b4319fb5f3738f1c22376b2262a5506588743ec92f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNN
ING,CreatedAt:1765503092395850304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8b513c3811f0ccc320fd8c70e65bff071664bf98de187d3c0c8dedc0acfd7cb,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765503092353803447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb9a40d9bc0cae5f1aa3d47f98c3396f33696bb38292f53c134c7a98616d2c,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a
5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765503092364116878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7ad41c54d44f160219d76f61163f0f6d1322bcd79a44bed28f98aff104caf7,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76
c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765503068540728773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b248ea8d9057ca54057c104df56d67fe1519490246c3a53aab533b082de1155,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765503067232578008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3e750bb16101a2c6c1e1e26da2a42e150277735ca96d7e011941714c7a1c7f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765503067144093051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"
name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489b91a0c7a86cec49f1bb28236b4989845d0c1044f8b3b67c20182d9b335274,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765503067001565027,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},
Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3564edce8f817af98da30719082054a0c6a1f399e691b48cc243640a05b7e9ba,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765503067113425363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-32
1955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32777ea8fb9a99aeca7f459cfa91aad8fbe50fbee86e0a26bc6b98135e414a0,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765503067072395362,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=229ee5e9-15ce-4bd3-a769-373031271b14 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.263923349Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25d06476-de05-452d-a101-971d7956b2cd name=/runtime.v1.RuntimeService/Version
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.264026613Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25d06476-de05-452d-a101-971d7956b2cd name=/runtime.v1.RuntimeService/Version
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.266306858Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fd4c991-31eb-4978-bfa8-4ebca22c2d38 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.266895265Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765503114266864832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fd4c991-31eb-4978-bfa8-4ebca22c2d38 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.268733410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b89def8d-d311-4c2f-89a0-0a2c7a026db4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.268828612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b89def8d-d311-4c2f-89a0-0a2c7a026db4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:31:54 pause-321955 crio[2831]: time="2025-12-12 01:31:54.269255766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e806b0688e58f21d2419f66d408b82e2f3b54a230ee726cb4b3e1e51136340b,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765503097019602717,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e19dfd2c1741aec5773da9ab3700bdf853400aba90aaacb12eb3d653a975a6a,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765503096997737615,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7c15d67f78dfcc7da5a2390448706f08211115b08e31010a6a56c30b6900b1,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765503092381751707,Labels:map[string]string{io.kubernetes.container.name: etc
d,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b829c5fe6f50c00fb1078b4319fb5f3738f1c22376b2262a5506588743ec92f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNN
ING,CreatedAt:1765503092395850304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8b513c3811f0ccc320fd8c70e65bff071664bf98de187d3c0c8dedc0acfd7cb,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765503092353803447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb9a40d9bc0cae5f1aa3d47f98c3396f33696bb38292f53c134c7a98616d2c,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a
5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765503092364116878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f7ad41c54d44f160219d76f61163f0f6d1322bcd79a44bed28f98aff104caf7,PodSandboxId:22c34b2ac2cb14268faac37a7c9becaf6398a3b27d9a76
c363f76e16301c2b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765503068540728773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqrnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682e0dc-cf65-4601-8452-e42ae19763db,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b248ea8d9057ca54057c104df56d67fe1519490246c3a53aab533b082de1155,PodSandboxId:9ec27a491136c35a59905598cb75be32a304ff5bec8e1f8935f9fa35deb5e5d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765503067232578008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c7jlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9cc8-e305-4635-be48-63cd11a20559,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3e750bb16101a2c6c1e1e26da2a42e150277735ca96d7e011941714c7a1c7f,PodSandboxId:562eac2695c2fadba9c037a21a21390adbbf911c8292c4d9b7ac3a3b87d67409,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765503067144093051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75019f0cf279931091d7fa9bc37851a4,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"
name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489b91a0c7a86cec49f1bb28236b4989845d0c1044f8b3b67c20182d9b335274,PodSandboxId:7d7a8a4964cabbf177b700f637c8aae39c0159f3820ee23d900bd9cbd859ed60,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765503067001565027,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1191a1102d721fa486dd34bac5e36c61,},
Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3564edce8f817af98da30719082054a0c6a1f399e691b48cc243640a05b7e9ba,PodSandboxId:3d604f5347a2195e2e3203905b5e85edc1fc153720eb3a7ccf075573a746cac0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765503067113425363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-32
1955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b52968e66e650f8022e1d87ef2287,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32777ea8fb9a99aeca7f459cfa91aad8fbe50fbee86e0a26bc6b98135e414a0,PodSandboxId:b13008a0d647786b16c18a7ae2fec7f16a516e21d1ca7a206e8135419dad411c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765503067072395362,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-321955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e59694aafa10cc78935259003e0bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b89def8d-d311-4c2f-89a0-0a2c7a026db4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	6e806b0688e58       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 seconds ago      Running             coredns                   2                   22c34b2ac2cb1       coredns-66bc5c9577-sqrnk               kube-system
	2e19dfd2c1741       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   17 seconds ago      Running             kube-proxy                2                   9ec27a491136c       kube-proxy-c7jlm                       kube-system
	8b829c5fe6f50       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   21 seconds ago      Running             kube-controller-manager   2                   562eac2695c2f       kube-controller-manager-pause-321955   kube-system
	dc7c15d67f78d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   22 seconds ago      Running             etcd                      2                   b13008a0d6477       etcd-pause-321955                      kube-system
	cbeb9a40d9bc0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   22 seconds ago      Running             kube-apiserver            2                   3d604f5347a21       kube-apiserver-pause-321955            kube-system
	e8b513c3811f0       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   22 seconds ago      Running             kube-scheduler            2                   7d7a8a4964cab       kube-scheduler-pause-321955            kube-system
	8f7ad41c54d44       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   45 seconds ago      Exited              coredns                   1                   22c34b2ac2cb1       coredns-66bc5c9577-sqrnk               kube-system
	3b248ea8d9057       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   47 seconds ago      Exited              kube-proxy                1                   9ec27a491136c       kube-proxy-c7jlm                       kube-system
	1d3e750bb1610       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   47 seconds ago      Exited              kube-controller-manager   1                   562eac2695c2f       kube-controller-manager-pause-321955   kube-system
	3564edce8f817       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   47 seconds ago      Exited              kube-apiserver            1                   3d604f5347a21       kube-apiserver-pause-321955            kube-system
	f32777ea8fb9a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   47 seconds ago      Exited              etcd                      1                   b13008a0d6477       etcd-pause-321955                      kube-system
	489b91a0c7a86       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   47 seconds ago      Exited              kube-scheduler            1                   7d7a8a4964cab       kube-scheduler-pause-321955            kube-system
	
	
	==> coredns [6e806b0688e58f21d2419f66d408b82e2f3b54a230ee726cb4b3e1e51136340b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43119 - 16896 "HINFO IN 7082807878144831796.2380034269300008608. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06559116s
	
	
	==> coredns [8f7ad41c54d44f160219d76f61163f0f6d1322bcd79a44bed28f98aff104caf7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:38337 - 62149 "HINFO IN 4711539473721116245.2655341612649581674. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045025462s
	
	
	==> describe nodes <==
	Name:               pause-321955
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-321955
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=pause-321955
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T01_29_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 01:29:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-321955
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 01:31:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 01:31:36 +0000   Fri, 12 Dec 2025 01:29:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 01:31:36 +0000   Fri, 12 Dec 2025 01:29:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 01:31:36 +0000   Fri, 12 Dec 2025 01:29:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 01:31:36 +0000   Fri, 12 Dec 2025 01:29:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.238
	  Hostname:    pause-321955
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a1525b17ba340a5a95384ccaa1b42ab
	  System UUID:                0a1525b1-7ba3-40a5-a953-84ccaa1b42ab
	  Boot ID:                    542e2c57-4039-4990-8f87-e42f342b3296
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-sqrnk                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     115s
	  kube-system                 etcd-pause-321955                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m1s
	  kube-system                 kube-apiserver-pause-321955             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-pause-321955    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-c7jlm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-pause-321955             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 112s                 kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  Starting                 42s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node pause-321955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node pause-321955 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m9s (x7 over 2m9s)  kubelet          Node pause-321955 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node pause-321955 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node pause-321955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node pause-321955 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                119s                 kubelet          Node pause-321955 status is now: NodeReady
	  Normal  RegisteredNode           116s                 node-controller  Node pause-321955 event: Registered Node pause-321955 in Controller
	  Normal  RegisteredNode           39s                  node-controller  Node pause-321955 event: Registered Node pause-321955 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22s (x8 over 23s)    kubelet          Node pause-321955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 23s)    kubelet          Node pause-321955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 23s)    kubelet          Node pause-321955 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15s                  node-controller  Node pause-321955 event: Registered Node pause-321955 in Controller
	
	
	==> dmesg <==
	[Dec12 01:29] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000079] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007368] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.208293] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.100737] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.151218] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.677519] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.152617] kauditd_printk_skb: 143 callbacks suppressed
	[  +0.178302] kauditd_printk_skb: 18 callbacks suppressed
	[Dec12 01:30] kauditd_printk_skb: 219 callbacks suppressed
	[ +26.871128] kauditd_printk_skb: 38 callbacks suppressed
	[Dec12 01:31] kauditd_printk_skb: 297 callbacks suppressed
	[  +3.862906] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.760218] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.605964] kauditd_printk_skb: 81 callbacks suppressed
	[  +7.083963] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [dc7c15d67f78dfcc7da5a2390448706f08211115b08e31010a6a56c30b6900b1] <==
	{"level":"warn","ts":"2025-12-12T01:31:34.449564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.473321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.495313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.502455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.516307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.536126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.569902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.587400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.622074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.660409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.690152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.700518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.717478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.772587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.777046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.791519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.812281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.826607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.844586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.869449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.881088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.903642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.918807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:34.937240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:35.053928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42806","server-name":"","error":"EOF"}
	
	
	==> etcd [f32777ea8fb9a99aeca7f459cfa91aad8fbe50fbee86e0a26bc6b98135e414a0] <==
	{"level":"warn","ts":"2025-12-12T01:31:10.927547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:10.947658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:10.959057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:10.976088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:10.992714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:10.997948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:31:11.114529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34594","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T01:31:29.775140Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-12T01:31:29.775347Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-321955","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.238:2380"],"advertise-client-urls":["https://192.168.50.238:2379"]}
	{"level":"error","ts":"2025-12-12T01:31:29.775495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T01:31:29.777395Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T01:31:29.777495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:31:29.777655Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"943d0bcc43b450ee","current-leader-member-id":"943d0bcc43b450ee"}
	{"level":"info","ts":"2025-12-12T01:31:29.777695Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-12T01:31:29.777708Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-12T01:31:29.777752Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T01:31:29.777818Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T01:31:29.777825Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-12T01:31:29.777872Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.238:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T01:31:29.777885Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.238:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T01:31:29.777893Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.238:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:31:29.782911Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.238:2380"}
	{"level":"info","ts":"2025-12-12T01:31:29.783653Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.238:2380"}
	{"level":"error","ts":"2025-12-12T01:31:29.783457Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.238:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:31:29.783709Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-321955","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.238:2380"],"advertise-client-urls":["https://192.168.50.238:2379"]}
	
	
	==> kernel <==
	 01:31:54 up 2 min,  0 users,  load average: 1.35, 0.60, 0.23
	Linux pause-321955 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3564edce8f817af98da30719082054a0c6a1f399e691b48cc243640a05b7e9ba] <==
	I1212 01:31:19.720613       1 controller.go:120] Shutting down OpenAPI V3 controller
	I1212 01:31:19.720640       1 storage_flowcontrol.go:172] APF bootstrap ensurer is exiting
	I1212 01:31:19.720652       1 controller.go:132] Ending legacy_token_tracking_controller
	I1212 01:31:19.720658       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I1212 01:31:19.720679       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I1212 01:31:19.720694       1 customresource_discovery_controller.go:332] Shutting down DiscoveryController
	I1212 01:31:19.720709       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1212 01:31:19.720723       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I1212 01:31:19.720998       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1212 01:31:19.721010       1 controller.go:157] Shutting down quota evaluator
	I1212 01:31:19.721024       1 controller.go:176] quota evaluator worker shutdown
	I1212 01:31:19.720329       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 01:31:19.720370       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1212 01:31:19.721756       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1212 01:31:19.721809       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 01:31:19.721859       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I1212 01:31:19.721987       1 repairip.go:246] Shutting down ipallocator-repair-controller
	I1212 01:31:19.722545       1 controller.go:176] quota evaluator worker shutdown
	I1212 01:31:19.722565       1 controller.go:176] quota evaluator worker shutdown
	I1212 01:31:19.722573       1 controller.go:176] quota evaluator worker shutdown
	I1212 01:31:19.722579       1 controller.go:176] quota evaluator worker shutdown
	I1212 01:31:19.723064       1 secure_serving.go:259] Stopped listening on [::]:8443
	I1212 01:31:19.723086       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1212 01:31:19.723485       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1212 01:31:19.723944       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-apiserver [cbeb9a40d9bc0cae5f1aa3d47f98c3396f33696bb38292f53c134c7a98616d2c] <==
	I1212 01:31:36.056542       1 aggregator.go:171] initial CRD sync complete...
	I1212 01:31:36.056626       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 01:31:36.056659       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 01:31:36.056783       1 cache.go:39] Caches are synced for autoregister controller
	I1212 01:31:36.070518       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1212 01:31:36.070850       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 01:31:36.070886       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 01:31:36.074600       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 01:31:36.080960       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 01:31:36.082271       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 01:31:36.104144       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 01:31:36.104336       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 01:31:36.109591       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 01:31:36.109656       1 policy_source.go:240] refreshing policies
	I1212 01:31:36.194930       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 01:31:36.704480       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 01:31:36.880089       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1212 01:31:37.608593       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.238]
	I1212 01:31:37.610621       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 01:31:37.617858       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 01:31:38.159335       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 01:31:38.249409       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 01:31:38.305070       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 01:31:38.325955       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 01:31:45.560299       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1d3e750bb16101a2c6c1e1e26da2a42e150277735ca96d7e011941714c7a1c7f] <==
	I1212 01:31:15.401076       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 01:31:15.402591       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 01:31:15.405153       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1212 01:31:15.406767       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 01:31:15.409903       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 01:31:15.414331       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 01:31:15.415583       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 01:31:15.415633       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 01:31:15.418046       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 01:31:15.421864       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 01:31:15.421904       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 01:31:15.421914       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 01:31:15.426268       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 01:31:15.429486       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1212 01:31:15.437314       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 01:31:15.437481       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1212 01:31:15.437516       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 01:31:15.437566       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 01:31:15.437642       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1212 01:31:15.438815       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 01:31:15.438914       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 01:31:15.440478       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 01:31:15.450055       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1212 01:31:15.450271       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 01:31:15.454126       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-controller-manager [8b829c5fe6f50c00fb1078b4319fb5f3738f1c22376b2262a5506588743ec92f] <==
	I1212 01:31:39.524946       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1212 01:31:39.525669       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 01:31:39.525947       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 01:31:39.525961       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 01:31:39.525969       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1212 01:31:39.528878       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 01:31:39.530795       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 01:31:39.530901       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 01:31:39.530975       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-321955"
	I1212 01:31:39.531783       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 01:31:39.531978       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1212 01:31:39.534119       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 01:31:39.534524       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1212 01:31:39.540796       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 01:31:39.541398       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 01:31:39.541405       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 01:31:39.541654       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 01:31:39.545873       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 01:31:39.546024       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 01:31:39.548478       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 01:31:39.550856       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 01:31:39.560950       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 01:31:39.562557       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1212 01:31:39.563863       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 01:31:39.570591       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [2e19dfd2c1741aec5773da9ab3700bdf853400aba90aaacb12eb3d653a975a6a] <==
	I1212 01:31:37.393998       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 01:31:37.495778       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 01:31:37.495887       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.238"]
	E1212 01:31:37.495990       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:31:37.566661       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1212 01:31:37.566789       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 01:31:37.566851       1 server_linux.go:132] "Using iptables Proxier"
	I1212 01:31:37.581144       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:31:37.581672       1 server.go:527] "Version info" version="v1.34.2"
	I1212 01:31:37.581740       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:31:37.588155       1 config.go:200] "Starting service config controller"
	I1212 01:31:37.588169       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 01:31:37.588354       1 config.go:106] "Starting endpoint slice config controller"
	I1212 01:31:37.588373       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 01:31:37.588423       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 01:31:37.588439       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 01:31:37.588968       1 config.go:309] "Starting node config controller"
	I1212 01:31:37.589007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 01:31:37.589023       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 01:31:37.688405       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 01:31:37.688504       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 01:31:37.688508       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [3b248ea8d9057ca54057c104df56d67fe1519490246c3a53aab533b082de1155] <==
	I1212 01:31:10.518685       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 01:31:12.222142       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 01:31:12.222364       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.238"]
	E1212 01:31:12.222477       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:31:12.310397       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1212 01:31:12.310521       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 01:31:12.310571       1 server_linux.go:132] "Using iptables Proxier"
	I1212 01:31:12.339663       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:31:12.341168       1 server.go:527] "Version info" version="v1.34.2"
	I1212 01:31:12.341423       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:31:12.360927       1 config.go:200] "Starting service config controller"
	I1212 01:31:12.361074       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 01:31:12.361104       1 config.go:106] "Starting endpoint slice config controller"
	I1212 01:31:12.361110       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 01:31:12.361125       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 01:31:12.361130       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 01:31:12.372851       1 config.go:309] "Starting node config controller"
	I1212 01:31:12.372897       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 01:31:12.372907       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 01:31:12.461493       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 01:31:12.461529       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 01:31:12.461550       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [489b91a0c7a86cec49f1bb28236b4989845d0c1044f8b3b67c20182d9b335274] <==
	I1212 01:31:12.054798       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 01:31:12.106422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 01:31:12.107083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 01:31:12.107576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 01:31:12.107821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 01:31:12.108017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 01:31:12.110893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 01:31:12.111334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 01:31:12.115833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 01:31:12.115950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 01:31:12.116027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 01:31:12.117512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 01:31:12.117747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 01:31:12.117950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 01:31:12.118058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 01:31:12.118070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 01:31:12.120581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 01:31:12.120813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1212 01:31:12.155417       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:31:29.919908       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1212 01:31:29.920384       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1212 01:31:29.920496       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1212 01:31:29.920611       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:31:29.920727       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1212 01:31:29.920772       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e8b513c3811f0ccc320fd8c70e65bff071664bf98de187d3c0c8dedc0acfd7cb] <==
	I1212 01:31:33.840490       1 serving.go:386] Generated self-signed cert in-memory
	W1212 01:31:35.948572       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 01:31:35.948789       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 01:31:35.948822       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 01:31:35.948914       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 01:31:36.046819       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 01:31:36.047656       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:31:36.056160       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 01:31:36.057034       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:31:36.059627       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:31:36.058068       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 01:31:36.160628       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.084357    3975 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: E1212 01:31:36.132085    3975 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-321955\" already exists" pod="kube-system/kube-scheduler-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.132259    3975 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: E1212 01:31:36.142743    3975 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-321955\" already exists" pod="kube-system/etcd-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.142778    3975 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: E1212 01:31:36.158513    3975 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-321955\" already exists" pod="kube-system/kube-apiserver-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.158749    3975 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.169977    3975 kubelet_node_status.go:124] "Node was previously registered" node="pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.170101    3975 kubelet_node_status.go:78] "Successfully registered node" node="pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.170138    3975 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.173489    3975 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: E1212 01:31:36.174821    3975 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-321955\" already exists" pod="kube-system/kube-controller-manager-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.282123    3975 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: E1212 01:31:36.294896    3975 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-321955\" already exists" pod="kube-system/kube-apiserver-pause-321955"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.658665    3975 apiserver.go:52] "Watching apiserver"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.687557    3975 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.697149    3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/347a9cc8-e305-4635-be48-63cd11a20559-lib-modules\") pod \"kube-proxy-c7jlm\" (UID: \"347a9cc8-e305-4635-be48-63cd11a20559\") " pod="kube-system/kube-proxy-c7jlm"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.697396    3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/347a9cc8-e305-4635-be48-63cd11a20559-xtables-lock\") pod \"kube-proxy-c7jlm\" (UID: \"347a9cc8-e305-4635-be48-63cd11a20559\") " pod="kube-system/kube-proxy-c7jlm"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.963804    3975 scope.go:117] "RemoveContainer" containerID="8f7ad41c54d44f160219d76f61163f0f6d1322bcd79a44bed28f98aff104caf7"
	Dec 12 01:31:36 pause-321955 kubelet[3975]: I1212 01:31:36.965756    3975 scope.go:117] "RemoveContainer" containerID="3b248ea8d9057ca54057c104df56d67fe1519490246c3a53aab533b082de1155"
	Dec 12 01:31:41 pause-321955 kubelet[3975]: E1212 01:31:41.831607    3975 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765503101830404612 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 12 01:31:41 pause-321955 kubelet[3975]: E1212 01:31:41.832171    3975 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765503101830404612 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 12 01:31:45 pause-321955 kubelet[3975]: I1212 01:31:45.506329    3975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 12 01:31:51 pause-321955 kubelet[3975]: E1212 01:31:51.836450    3975 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765503111833793146 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 12 01:31:51 pause-321955 kubelet[3975]: E1212 01:31:51.836473    3975 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765503111833793146 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-321955 -n pause-321955
helpers_test.go:270: (dbg) Run:  kubectl --context pause-321955 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (75.24s)

Test pass (366/431)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.88
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.2
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.18
12 TestDownloadOnly/v1.34.2/json-events 3.39
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.18
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.04
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.18
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.7
31 TestOffline 87.78
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 422.2
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/serial/GCPAuth/FakeCredentials 34.7
45 TestAddons/parallel/RegistryCreds 0.81
47 TestAddons/parallel/InspektorGadget 12.29
48 TestAddons/parallel/MetricsServer 6.51
50 TestAddons/parallel/CSI 56.91
51 TestAddons/parallel/Headlamp 21.88
52 TestAddons/parallel/CloudSpanner 5.85
54 TestAddons/parallel/NvidiaDevicePlugin 6.61
57 TestAddons/StoppedEnableDisable 85.98
58 TestCertOptions 51.91
59 TestCertExpiration 260.08
61 TestForceSystemdFlag 67.18
62 TestForceSystemdEnv 67.45
67 TestErrorSpam/setup 44.35
68 TestErrorSpam/start 0.45
69 TestErrorSpam/status 0.85
70 TestErrorSpam/pause 2.01
71 TestErrorSpam/unpause 2.31
72 TestErrorSpam/stop 6.49
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 88.65
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 34.32
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 4.04
84 TestFunctional/serial/CacheCmd/cache/add_local 1.57
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
86 TestFunctional/serial/CacheCmd/cache/list 0.09
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.98
89 TestFunctional/serial/CacheCmd/cache/delete 0.17
90 TestFunctional/serial/MinikubeKubectlCmd 0.18
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 378.05
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.5
95 TestFunctional/serial/LogsFileCmd 1.54
96 TestFunctional/serial/InvalidService 3.97
98 TestFunctional/parallel/ConfigCmd 0.47
100 TestFunctional/parallel/DryRun 0.29
101 TestFunctional/parallel/InternationalLanguage 0.16
102 TestFunctional/parallel/StatusCmd 1.19
106 TestFunctional/parallel/ServiceCmdConnect 8.63
107 TestFunctional/parallel/AddonsCmd 0.19
108 TestFunctional/parallel/PersistentVolumeClaim 27.48
110 TestFunctional/parallel/SSHCmd 0.38
111 TestFunctional/parallel/CpCmd 1.27
112 TestFunctional/parallel/MySQL 102.96
113 TestFunctional/parallel/FileSync 0.18
114 TestFunctional/parallel/CertSync 1.64
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
122 TestFunctional/parallel/License 0.25
132 TestFunctional/parallel/ServiceCmd/DeployApp 8.23
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
134 TestFunctional/parallel/ProfileCmd/profile_list 0.4
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
136 TestFunctional/parallel/MountCmd/any-port 6.94
137 TestFunctional/parallel/ServiceCmd/List 0.38
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
139 TestFunctional/parallel/MountCmd/specific-port 1.65
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
141 TestFunctional/parallel/ServiceCmd/Format 0.44
142 TestFunctional/parallel/ServiceCmd/URL 0.42
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.67
144 TestFunctional/parallel/Version/short 0.06
145 TestFunctional/parallel/Version/components 0.45
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
150 TestFunctional/parallel/ImageCommands/ImageBuild 3.14
151 TestFunctional/parallel/ImageCommands/Setup 0.44
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.2
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.97
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.02
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 72.47
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 37.37
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.09
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.26
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.08
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.2
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.6
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.14
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 59.83
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.47
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.48
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.56
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.46
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.23
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.13
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.72
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.18
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 77.24
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.36
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.33
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 188.49
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.19
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.11
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.4
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.23
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.24
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.2
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.2
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.22
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.14
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.17
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.95
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.88
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.03
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.51
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.5
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.94
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.58
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.37
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.33
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.33
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 62.14
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.59
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.27
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.08
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.08
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.08
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.48
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.25
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.26
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 261.66
262 TestMultiControlPlane/serial/DeployApp 7.7
263 TestMultiControlPlane/serial/PingHostFromPods 1.73
264 TestMultiControlPlane/serial/AddWorkerNode 47.99
265 TestMultiControlPlane/serial/NodeLabels 0.09
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
267 TestMultiControlPlane/serial/CopyFile 13.3
268 TestMultiControlPlane/serial/StopSecondaryNode 82.48
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
270 TestMultiControlPlane/serial/RestartSecondaryNode 41.94
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.05
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 379.88
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.91
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
275 TestMultiControlPlane/serial/StopCluster 243.18
276 TestMultiControlPlane/serial/RestartCluster 93.68
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.59
278 TestMultiControlPlane/serial/AddSecondaryNode 82.02
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.74
284 TestJSONOutput/start/Command 89.54
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.8
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.7
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 8.37
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.25
312 TestMainNoArgs 0.07
313 TestMinikubeProfile 87.67
316 TestMountStart/serial/StartWithMountFirst 23.79
317 TestMountStart/serial/VerifyMountFirst 0.35
318 TestMountStart/serial/StartWithMountSecond 24.87
319 TestMountStart/serial/VerifyMountSecond 0.32
320 TestMountStart/serial/DeleteFirst 0.73
321 TestMountStart/serial/VerifyMountPostDelete 0.33
322 TestMountStart/serial/Stop 1.38
323 TestMountStart/serial/RestartStopped 21.41
324 TestMountStart/serial/VerifyMountPostStop 0.33
327 TestMultiNode/serial/FreshStart2Nodes 135.6
328 TestMultiNode/serial/DeployApp2Nodes 5.23
329 TestMultiNode/serial/PingHostFrom2Pods 1
330 TestMultiNode/serial/AddNode 46.75
331 TestMultiNode/serial/MultiNodeLabels 0.08
332 TestMultiNode/serial/ProfileList 0.49
333 TestMultiNode/serial/CopyFile 6.52
334 TestMultiNode/serial/StopNode 2.55
335 TestMultiNode/serial/StartAfterStop 44.67
336 TestMultiNode/serial/RestartKeepsNodes 341.23
337 TestMultiNode/serial/DeleteNode 2.8
338 TestMultiNode/serial/StopMultiNode 166.35
339 TestMultiNode/serial/RestartMultiNode 119.89
340 TestMultiNode/serial/ValidateNameConflict 44.72
347 TestScheduledStopUnix 112.69
351 TestRunningBinaryUpgrade 401.42
353 TestKubernetesUpgrade 179.24
355 TestStoppedBinaryUpgrade/Setup 0.55
356 TestStoppedBinaryUpgrade/Upgrade 167.95
357 TestISOImage/Setup 47.68
359 TestISOImage/Binaries/crictl 0.18
360 TestISOImage/Binaries/curl 0.2
361 TestISOImage/Binaries/docker 0.19
362 TestISOImage/Binaries/git 0.16
363 TestISOImage/Binaries/iptables 0.18
364 TestISOImage/Binaries/podman 0.18
365 TestISOImage/Binaries/rsync 0.17
366 TestISOImage/Binaries/socat 0.17
367 TestISOImage/Binaries/wget 0.17
368 TestISOImage/Binaries/VBoxControl 0.19
369 TestISOImage/Binaries/VBoxService 0.18
377 TestStoppedBinaryUpgrade/MinikubeLogs 1.6
379 TestPause/serial/Start 95.28
381 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
382 TestNoKubernetes/serial/StartWithK8s 64.74
390 TestNetworkPlugins/group/false 4.12
395 TestNoKubernetes/serial/StartWithStopK8s 33.09
396 TestNoKubernetes/serial/Start 23.78
397 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
398 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
399 TestNoKubernetes/serial/ProfileList 1.7
400 TestNoKubernetes/serial/Stop 1.46
401 TestNoKubernetes/serial/StartNoArgs 20.97
402 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
404 TestStartStop/group/old-k8s-version/serial/FirstStart 114.29
406 TestStartStop/group/no-preload/serial/FirstStart 74.02
408 TestStartStop/group/embed-certs/serial/FirstStart 95.65
409 TestStartStop/group/old-k8s-version/serial/DeployApp 10.36
410 TestStartStop/group/no-preload/serial/DeployApp 9.4
411 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.22
412 TestStartStop/group/old-k8s-version/serial/Stop 83.2
413 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
414 TestStartStop/group/no-preload/serial/Stop 76.7
415 TestStartStop/group/embed-certs/serial/DeployApp 10.34
416 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
417 TestStartStop/group/embed-certs/serial/Stop 84.25
419 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.16
420 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
421 TestStartStop/group/no-preload/serial/SecondStart 58.14
422 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
423 TestStartStop/group/old-k8s-version/serial/SecondStart 63.14
424 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
425 TestStartStop/group/embed-certs/serial/SecondStart 53.47
426 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.45
427 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.56
428 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 85.22
430 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.08
431 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
432 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
433 TestStartStop/group/no-preload/serial/Pause 3.02
435 TestStartStop/group/newest-cni/serial/FirstStart 49.07
436 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
437 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
438 TestStartStop/group/old-k8s-version/serial/Pause 3.28
439 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.01
440 TestNetworkPlugins/group/auto/Start 99.12
441 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
442 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
443 TestStartStop/group/embed-certs/serial/Pause 3.39
444 TestNetworkPlugins/group/kindnet/Start 77.66
445 TestStartStop/group/newest-cni/serial/DeployApp 0
446 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.34
447 TestStartStop/group/newest-cni/serial/Stop 88.27
448 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
449 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.16
450 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
451 TestNetworkPlugins/group/auto/KubeletFlags 0.19
452 TestNetworkPlugins/group/auto/NetCatPod 10.28
453 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
454 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
455 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
456 TestNetworkPlugins/group/auto/DNS 0.23
457 TestNetworkPlugins/group/auto/Localhost 0.15
458 TestNetworkPlugins/group/auto/HairPin 0.18
459 TestNetworkPlugins/group/kindnet/DNS 0.19
460 TestNetworkPlugins/group/kindnet/Localhost 0.16
461 TestNetworkPlugins/group/kindnet/HairPin 0.16
462 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
463 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
464 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.25
465 TestNetworkPlugins/group/calico/Start 91.9
466 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
467 TestStartStop/group/newest-cni/serial/SecondStart 56.9
468 TestNetworkPlugins/group/custom-flannel/Start 113.32
469 TestNetworkPlugins/group/enable-default-cni/Start 146.69
470 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
471 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
472 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
473 TestStartStop/group/newest-cni/serial/Pause 4.9
474 TestNetworkPlugins/group/flannel/Start 97.42
475 TestNetworkPlugins/group/calico/ControllerPod 6.02
476 TestNetworkPlugins/group/calico/KubeletFlags 0.26
477 TestNetworkPlugins/group/calico/NetCatPod 11.38
478 TestNetworkPlugins/group/calico/DNS 0.25
479 TestNetworkPlugins/group/calico/Localhost 0.21
480 TestNetworkPlugins/group/calico/HairPin 0.2
481 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
482 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.46
483 TestNetworkPlugins/group/bridge/Start 94.19
484 TestNetworkPlugins/group/custom-flannel/DNS 0.23
485 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
486 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
488 TestISOImage/PersistentMounts//data 0.23
489 TestISOImage/PersistentMounts//var/lib/docker 0.22
490 TestISOImage/PersistentMounts//var/lib/cni 0.21
491 TestISOImage/PersistentMounts//var/lib/kubelet 0.21
492 TestISOImage/PersistentMounts//var/lib/minikube 0.2
493 TestISOImage/PersistentMounts//var/lib/toolbox 0.22
494 TestISOImage/PersistentMounts//var/lib/boot2docker 0.2
495 TestISOImage/VersionJSON 0.2
496 TestISOImage/eBPFSupport 0.2
497 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
498 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.32
499 TestNetworkPlugins/group/flannel/ControllerPod 6.01
500 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
501 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
502 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
503 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
504 TestNetworkPlugins/group/flannel/NetCatPod 11.27
505 TestNetworkPlugins/group/flannel/DNS 0.21
506 TestNetworkPlugins/group/flannel/Localhost 0.17
507 TestNetworkPlugins/group/flannel/HairPin 0.17
508 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
509 TestNetworkPlugins/group/bridge/NetCatPod 9.3
510 TestNetworkPlugins/group/bridge/DNS 0.14
511 TestNetworkPlugins/group/bridge/Localhost 0.13
512 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.28.0/json-events (6.88s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-525167 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-525167 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.879130164s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.88s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1211 23:55:42.218307  190272 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1211 23:55:42.218440  190272 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-525167
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-525167: exit status 85 (86.937529ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-525167 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-525167 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:55:35.398733  190284 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:35.399025  190284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:35.399039  190284 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:35.399044  190284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:35.399249  190284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	W1211 23:55:35.399391  190284 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22101-186349/.minikube/config/config.json: open /home/jenkins/minikube-integration/22101-186349/.minikube/config/config.json: no such file or directory
	I1211 23:55:35.399943  190284 out.go:368] Setting JSON to true
	I1211 23:55:35.400924  190284 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":20279,"bootTime":1765477056,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:55:35.400994  190284 start.go:143] virtualization: kvm guest
	I1211 23:55:35.405982  190284 out.go:99] [download-only-525167] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1211 23:55:35.406242  190284 notify.go:221] Checking for updates...
	W1211 23:55:35.406278  190284 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball: no such file or directory
	I1211 23:55:35.408015  190284 out.go:171] MINIKUBE_LOCATION=22101
	I1211 23:55:35.410365  190284 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:35.412293  190284 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1211 23:55:35.414204  190284 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:35.415813  190284 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1211 23:55:35.419032  190284 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 23:55:35.419443  190284 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 23:55:35.463856  190284 out.go:99] Using the kvm2 driver based on user configuration
	I1211 23:55:35.463910  190284 start.go:309] selected driver: kvm2
	I1211 23:55:35.463918  190284 start.go:927] validating driver "kvm2" against <nil>
	I1211 23:55:35.464369  190284 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1211 23:55:35.465080  190284 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1211 23:55:35.465239  190284 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 23:55:35.465283  190284 cni.go:84] Creating CNI manager for ""
	I1211 23:55:35.465344  190284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:55:35.465351  190284 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 23:55:35.465399  190284 start.go:353] cluster config:
	{Name:download-only-525167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-525167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:55:35.465732  190284 iso.go:125] acquiring lock: {Name:mkc8bf4754eb4f0261bb252fe2c8bf1a2bf2967f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:55:35.467866  190284 out.go:99] Downloading VM boot image ...
	I1211 23:55:35.467975  190284 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22101-186349/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
	I1211 23:55:38.305015  190284 out.go:99] Starting "download-only-525167" primary control-plane node in "download-only-525167" cluster
	I1211 23:55:38.305072  190284 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1211 23:55:38.317820  190284 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1211 23:55:38.317875  190284 cache.go:65] Caching tarball of preloaded images
	I1211 23:55:38.318123  190284 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1211 23:55:38.319979  190284 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1211 23:55:38.320013  190284 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1211 23:55:38.342347  190284 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1211 23:55:38.342501  190284 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-525167 host does not exist
	  To start a cluster, run: "minikube start -p download-only-525167"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.20s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.18s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-525167
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.18s)

TestDownloadOnly/v1.34.2/json-events (3.39s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-449217 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-449217 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.387697948s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.39s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1211 23:55:46.078630  190272 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1211 23:55:46.078678  190272 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-449217
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-449217: exit status 85 (83.682407ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-525167 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-525167 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-525167                                                                                                                                                 │ download-only-525167 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ -o=json --download-only -p download-only-449217 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-449217 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:55:42.755156  190475 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:42.755479  190475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:42.755487  190475 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:42.755492  190475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:42.755746  190475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1211 23:55:42.756274  190475 out.go:368] Setting JSON to true
	I1211 23:55:42.757216  190475 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":20287,"bootTime":1765477056,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:55:42.757286  190475 start.go:143] virtualization: kvm guest
	I1211 23:55:42.759436  190475 out.go:99] [download-only-449217] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1211 23:55:42.759770  190475 notify.go:221] Checking for updates...
	I1211 23:55:42.761568  190475 out.go:171] MINIKUBE_LOCATION=22101
	I1211 23:55:42.763555  190475 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:42.765619  190475 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1211 23:55:42.767472  190475 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:42.769288  190475 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-449217 host does not exist
	  To start a cluster, run: "minikube start -p download-only-449217"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.18s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-449217
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.18s)

TestDownloadOnly/v1.35.0-beta.0/json-events (3.04s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-859495 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-859495 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.038793561s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.04s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1211 23:55:49.588006  190272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1211 23:55:49.588052  190272 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-186349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-859495
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-859495: exit status 85 (83.025536ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-525167 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-525167 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-525167                                                                                                                                                        │ download-only-525167 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ -o=json --download-only -p download-only-449217 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-449217 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-449217                                                                                                                                                        │ download-only-449217 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ -o=json --download-only -p download-only-859495 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-859495 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:46
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:55:46.609648  190636 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:46.609788  190636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:46.609798  190636 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:46.609804  190636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:46.610016  190636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1211 23:55:46.610590  190636 out.go:368] Setting JSON to true
	I1211 23:55:46.611483  190636 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":20291,"bootTime":1765477056,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:55:46.611553  190636 start.go:143] virtualization: kvm guest
	I1211 23:55:46.613529  190636 out.go:99] [download-only-859495] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1211 23:55:46.613841  190636 notify.go:221] Checking for updates...
	I1211 23:55:46.615503  190636 out.go:171] MINIKUBE_LOCATION=22101
	I1211 23:55:46.617283  190636 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:46.619012  190636 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1211 23:55:46.620794  190636 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1211 23:55:46.622255  190636 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-859495 host does not exist
	  To start a cluster, run: "minikube start -p download-only-859495"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.18s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-859495
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.7s)

=== RUN   TestBinaryMirror
I1211 23:55:50.546695  190272 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-928519 --alsologtostderr --binary-mirror http://127.0.0.1:46143 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-928519" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-928519
--- PASS: TestBinaryMirror (0.70s)

TestOffline (87.78s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-516505 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-516505 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m26.694566003s)
helpers_test.go:176: Cleaning up "offline-crio-516505" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-516505
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-516505: (1.084090405s)
--- PASS: TestOffline (87.78s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-081397
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-081397: exit status 85 (84.22353ms)

-- stdout --
	* Profile "addons-081397" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-081397"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-081397
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-081397: exit status 85 (93.904369ms)

-- stdout --
	* Profile "addons-081397" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-081397"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (422.2s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-081397 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-081397 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (7m2.201822797s)
--- PASS: TestAddons/Setup (422.20s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-081397 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-081397 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (34.7s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-081397 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-081397 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5fe0ee52-bebd-4a25-a44f-86b036a8dccc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5fe0ee52-bebd-4a25-a44f-86b036a8dccc] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 34.005892154s
addons_test.go:696: (dbg) Run:  kubectl --context addons-081397 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-081397 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-081397 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (34.70s)

TestAddons/parallel/RegistryCreds (0.81s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 8.676685ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-081397
addons_test.go:334: (dbg) Run:  kubectl --context addons-081397 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.81s)

TestAddons/parallel/InspektorGadget (12.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-tzblv" [90e742c1-9ec9-44a8-93e8-db124a2b2fc1] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.221140374s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-081397 addons disable inspektor-gadget --alsologtostderr -v=1: (6.062527194s)
--- PASS: TestAddons/parallel/InspektorGadget (12.29s)

TestAddons/parallel/MetricsServer (6.51s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 13.34455ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-zfsb8" [fd42d792-5bd0-449d-92f8-f0c0c74c4975] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.022002372s
addons_test.go:465: (dbg) Run:  kubectl --context addons-081397 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-081397 addons disable metrics-server --alsologtostderr -v=1: (1.393463232s)
--- PASS: TestAddons/parallel/MetricsServer (6.51s)

TestAddons/parallel/CSI (56.91s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1212 00:03:37.584639  190272 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1212 00:03:37.595118  190272 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1212 00:03:37.595239  190272 kapi.go:107] duration metric: took 10.623431ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 10.64728ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-081397 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-081397 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [7db810ef-a5a1-4a31-b04e-7d32883f2ce8] Pending
helpers_test.go:353: "task-pv-pod" [7db810ef-a5a1-4a31-b04e-7d32883f2ce8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [7db810ef-a5a1-4a31-b04e-7d32883f2ce8] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004286958s
addons_test.go:574: (dbg) Run:  kubectl --context addons-081397 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-081397 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-081397 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-081397 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-081397 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-081397 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-081397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-081397 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [84f6ae84-e028-467a-96ee-6db62de1f5fc] Pending
helpers_test.go:353: "task-pv-pod-restore" [84f6ae84-e028-467a-96ee-6db62de1f5fc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [84f6ae84-e028-467a-96ee-6db62de1f5fc] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.011079329s
addons_test.go:616: (dbg) Run:  kubectl --context addons-081397 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-081397 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-081397 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-081397 addons disable volumesnapshots --alsologtostderr -v=1: (1.266766796s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-081397 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.296160099s)
--- PASS: TestAddons/parallel/CSI (56.91s)

TestAddons/parallel/Headlamp (21.88s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-081397 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-081397 --alsologtostderr -v=1: (1.248348585s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-6k99t" [7a664ca3-7385-4a26-a705-17201ad211e7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-6k99t" [7a664ca3-7385-4a26-a705-17201ad211e7] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.006702181s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-081397 addons disable headlamp --alsologtostderr -v=1: (6.621923633s)
--- PASS: TestAddons/parallel/Headlamp (21.88s)

TestAddons/parallel/CloudSpanner (5.85s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-rlznc" [73c4629e-b87d-4d90-bcf2-4c4b3ca62b1c] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006070497s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.85s)

TestAddons/parallel/NvidiaDevicePlugin (6.61s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-rbpjs" [22649f4f-f712-4939-86ae-d4e2f87acc0a] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006943011s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.61s)

TestAddons/StoppedEnableDisable (85.98s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-081397
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-081397: (1m25.706820878s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-081397
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-081397
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-081397
--- PASS: TestAddons/StoppedEnableDisable (85.98s)

TestCertOptions (51.91s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-222511 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-222511 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (50.30624999s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-222511 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-222511 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-222511 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-222511" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-222511
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-222511: (1.075483373s)
--- PASS: TestCertOptions (51.91s)

TestCertExpiration (260.08s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-809349 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-809349 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (46.47402219s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-809349 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-809349 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (32.753767694s)
helpers_test.go:176: Cleaning up "cert-expiration-809349" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-809349
--- PASS: TestCertExpiration (260.08s)

TestForceSystemdFlag (67.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-910573 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-910573 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m5.564153855s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-910573 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-910573" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-910573
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-910573: (1.426082644s)
--- PASS: TestForceSystemdFlag (67.18s)

TestForceSystemdEnv (67.45s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-252926 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-252926 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.419744854s)
helpers_test.go:176: Cleaning up "force-systemd-env-252926" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-252926
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-252926: (1.032666016s)
--- PASS: TestForceSystemdEnv (67.45s)

TestErrorSpam/setup (44.35s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-350685 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-350685 --driver=kvm2  --container-runtime=crio
E1212 00:12:54.492010  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:12:54.498752  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:12:54.510428  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:12:54.532138  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:12:54.573777  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:12:54.655397  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:12:54.817219  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:12:55.139142  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:12:55.781389  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:12:57.062929  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:12:59.624543  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:13:04.745936  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:13:14.988542  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-350685 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-350685 --driver=kvm2  --container-runtime=crio: (44.347195353s)
--- PASS: TestErrorSpam/setup (44.35s)

TestErrorSpam/start (0.45s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 start --dry-run
--- PASS: TestErrorSpam/start (0.45s)

TestErrorSpam/status (0.85s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 status
--- PASS: TestErrorSpam/status (0.85s)

TestErrorSpam/pause (2.01s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 pause
--- PASS: TestErrorSpam/pause (2.01s)

TestErrorSpam/unpause (2.31s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 unpause
--- PASS: TestErrorSpam/unpause (2.31s)

TestErrorSpam/stop (6.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 stop: (2.612981791s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 stop: (2.075346871s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-350685 --log_dir /tmp/nospam-350685 stop: (1.801809335s)
--- PASS: TestErrorSpam/stop (6.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22101-186349/.minikube/files/etc/test/nested/copy/190272/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (88.65s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843156 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1212 00:13:35.470671  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:14:16.432199  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-843156 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m28.645328817s)
--- PASS: TestFunctional/serial/StartWithProxy (88.65s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.32s)

=== RUN   TestFunctional/serial/SoftStart
I1212 00:15:01.494971  190272 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843156 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-843156 --alsologtostderr -v=8: (34.322017458s)
functional_test.go:678: soft start took 34.32303543s for "functional-843156" cluster.
I1212 00:15:35.817425  190272 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (34.32s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-843156 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-843156 cache add registry.k8s.io/pause:3.1: (1.285995953s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 cache add registry.k8s.io/pause:3.3
E1212 00:15:38.354209  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-843156 cache add registry.k8s.io/pause:3.3: (1.317295023s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-843156 cache add registry.k8s.io/pause:latest: (1.433047656s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

TestFunctional/serial/CacheCmd/cache/add_local (1.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-843156 /tmp/TestFunctionalserialCacheCmdcacheadd_local219467754/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 cache add minikube-local-cache-test:functional-843156
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-843156 cache add minikube-local-cache-test:functional-843156: (1.006149135s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 cache delete minikube-local-cache-test:functional-843156
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-843156
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.57s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843156 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (233.532966ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-843156 cache reload: (1.201100181s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.18s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 kubectl -- --context functional-843156 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.18s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-843156 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (378.05s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843156 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 00:17:54.492746  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:22.202581  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-843156 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m18.047696068s)
functional_test.go:776: restart took 6m18.047911411s for "functional-843156" cluster.
I1212 00:22:02.494844  190272 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (378.05s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-843156 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.5s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-843156 logs: (1.499742897s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

TestFunctional/serial/LogsFileCmd (1.54s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 logs --file /tmp/TestFunctionalserialLogsFileCmd188477600/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-843156 logs --file /tmp/TestFunctionalserialLogsFileCmd188477600/001/logs.txt: (1.539579193s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

TestFunctional/serial/InvalidService (3.97s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-843156 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-843156
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-843156: exit status 115 (259.129811ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.201:30119 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-843156 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.97s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843156 config get cpus: exit status 14 (73.361823ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843156 config get cpus: exit status 14 (74.067286ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
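The exit status 14 above is deliberate: `config get` on an unset key reports "not found" through its exit code rather than stdout, and the harness asserts on that code. A minimal Go sketch of reading a child process's exit status the same way (the `exitCode` helper and the `sh -c "exit 14"` stand-in are illustrative, not part of the test suite):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and reports its exit status. A nonzero status
// surfaces as an *exec.ExitError, which is distinct from failing to launch
// the binary at all; that distinction is how a harness can treat exit 14
// ("key not found") as an expected, assertable outcome rather than a crash.
func exitCode(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return -1, err // binary missing, permission denied, etc.
}

func main() {
	code, err := exitCode("sh", "-c", "exit 14")
	fmt.Println(code, err)
}
```

Checking the returned code against an expected value mirrors the harness's `Non-zero exit ... exit status 14` assertions.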

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843156 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-843156 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (142.218117ms)
-- stdout --
	* [functional-843156] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1212 00:22:20.101010  202845 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:22:20.101407  202845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:22:20.101423  202845 out.go:374] Setting ErrFile to fd 2...
	I1212 00:22:20.101429  202845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:22:20.101814  202845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 00:22:20.102567  202845 out.go:368] Setting JSON to false
	I1212 00:22:20.103868  202845 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":21884,"bootTime":1765477056,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:22:20.103969  202845 start.go:143] virtualization: kvm guest
	I1212 00:22:20.107201  202845 out.go:179] * [functional-843156] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:22:20.108823  202845 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:22:20.108837  202845 notify.go:221] Checking for updates...
	I1212 00:22:20.111406  202845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:22:20.112616  202845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 00:22:20.114089  202845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1212 00:22:20.115523  202845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:22:20.116981  202845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:22:20.118873  202845 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:22:20.119412  202845 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:22:20.154411  202845 out.go:179] * Using the kvm2 driver based on existing profile
	I1212 00:22:20.155716  202845 start.go:309] selected driver: kvm2
	I1212 00:22:20.155741  202845 start.go:927] validating driver "kvm2" against &{Name:functional-843156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-843156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:22:20.155912  202845 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:22:20.158438  202845 out.go:203] 
	W1212 00:22:20.159790  202845 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 00:22:20.161398  202845 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843156 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843156 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-843156 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (156.237248ms)
-- stdout --
	* [functional-843156] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1212 00:22:19.941364  202794 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:22:19.941720  202794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:22:19.941734  202794 out.go:374] Setting ErrFile to fd 2...
	I1212 00:22:19.941742  202794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:22:19.942261  202794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 00:22:19.942836  202794 out.go:368] Setting JSON to false
	I1212 00:22:19.943782  202794 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":21884,"bootTime":1765477056,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:22:19.943933  202794 start.go:143] virtualization: kvm guest
	I1212 00:22:19.945621  202794 out.go:179] * [functional-843156] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1212 00:22:19.947912  202794 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:22:19.947898  202794 notify.go:221] Checking for updates...
	I1212 00:22:19.949656  202794 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:22:19.951282  202794 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 00:22:19.952685  202794 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1212 00:22:19.954042  202794 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:22:19.955598  202794 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:22:19.957509  202794 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:22:19.958301  202794 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:22:20.009095  202794 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1212 00:22:20.012811  202794 start.go:309] selected driver: kvm2
	I1212 00:22:20.012835  202794 start.go:927] validating driver "kvm2" against &{Name:functional-843156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-843156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:22:20.012994  202794 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:22:20.015803  202794 out.go:203] 
	W1212 00:22:20.017441  202794 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 00:22:20.019119  202794 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)

TestFunctional/parallel/ServiceCmdConnect (8.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-843156 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-843156 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-v2pcs" [502d0fce-f1bd-4bb8-a19c-3d14d8ff443b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-v2pcs" [502d0fce-f1bd-4bb8-a19c-3d14d8ff443b] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.006598049s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.201:31991
functional_test.go:1680: http://192.168.39.201:31991: success! body:
Request served by hello-node-connect-7d85dfc575-v2pcs
HTTP/1.1 GET /
Host: 192.168.39.201:31991
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.63s)
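The endpoint found above, http://192.168.39.201:31991, points at a NodePort, which Kubernetes allocates from the default 30000-32767 range. A small sketch of extracting and sanity-checking that port from a service URL (`nodePort` is a hypothetical helper, not part of the test suite):

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// nodePort parses a service URL like "http://192.168.39.201:31991",
// returns the port, and verifies it sits inside Kubernetes' default
// NodePort allocation range of 30000-32767.
func nodePort(raw string) (int, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return 0, err
	}
	port, err := strconv.Atoi(u.Port())
	if err != nil {
		return 0, fmt.Errorf("no numeric port in %q: %w", raw, err)
	}
	if port < 30000 || port > 32767 {
		return 0, fmt.Errorf("port %d outside default NodePort range", port)
	}
	return port, nil
}

func main() {
	p, err := nodePort("http://192.168.39.201:31991")
	fmt.Println(p, err)
}
```

The range check only holds for clusters using the default `--service-node-port-range`; a cluster configured otherwise would need the bounds adjusted.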

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (27.48s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [b8b07950-5b67-4c6a-b247-adfff0295856] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005221496s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-843156 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-843156 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-843156 get pvc myclaim -o=json
I1212 00:22:16.024373  190272 retry.go:31] will retry after 1.654778603s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:9a667744-ac1a-4616-8fee-59611bfef2e0 ResourceVersion:501 Generation:0 CreationTimestamp:2025-12-12 00:22:15 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc005428520 VolumeMode:0xc005428530 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-843156 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-843156 apply -f testdata/storage-provisioner/pod.yaml
I1212 00:22:17.903895  190272 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [97524539-4e1c-4427-99e9-510983ff46c9] Pending
helpers_test.go:353: "sp-pod" [97524539-4e1c-4427-99e9-510983ff46c9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [97524539-4e1c-4427-99e9-510983ff46c9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004411696s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-843156 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-843156 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-843156 apply -f testdata/storage-provisioner/pod.yaml
I1212 00:22:30.026678  190272 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f311db17-37c6-43c5-ba25-ae0112ca44d1] Pending
helpers_test.go:353: "sp-pod" [f311db17-37c6-43c5-ba25-ae0112ca44d1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005496502s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-843156 exec sp-pod -- ls /tmp/mount
E1212 00:22:54.483530  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.48s)

TestFunctional/parallel/SSHCmd (0.38s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

TestFunctional/parallel/CpCmd (1.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh -n functional-843156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 cp functional-843156:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3515762831/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh -n functional-843156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh -n functional-843156 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.27s)

TestFunctional/parallel/MySQL (102.96s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-843156 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-krnrx" [03389d4f-808d-4cd0-8294-bdb7818ea8cc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-krnrx" [03389d4f-808d-4cd0-8294-bdb7818ea8cc] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m36.006088525s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-843156 exec mysql-6bcdcbc558-krnrx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-843156 exec mysql-6bcdcbc558-krnrx -- mysql -ppassword -e "show databases;": exit status 1 (271.433034ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1212 00:24:00.650045  190272 retry.go:31] will retry after 868.202467ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-843156 exec mysql-6bcdcbc558-krnrx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-843156 exec mysql-6bcdcbc558-krnrx -- mysql -ppassword -e "show databases;": exit status 1 (177.644456ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1212 00:24:01.696886  190272 retry.go:31] will retry after 1.211221111s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-843156 exec mysql-6bcdcbc558-krnrx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-843156 exec mysql-6bcdcbc558-krnrx -- mysql -ppassword -e "show databases;": exit status 1 (183.685392ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1212 00:24:03.093235  190272 retry.go:31] will retry after 1.260796805s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-843156 exec mysql-6bcdcbc558-krnrx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-843156 exec mysql-6bcdcbc558-krnrx -- mysql -ppassword -e "show databases;": exit status 1 (163.450779ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1212 00:24:04.518429  190272 retry.go:31] will retry after 2.43548407s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-843156 exec mysql-6bcdcbc558-krnrx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (102.96s)

TestFunctional/parallel/FileSync (0.18s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/190272/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "sudo cat /etc/test/nested/copy/190272/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)

TestFunctional/parallel/CertSync (1.64s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/190272.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "sudo cat /etc/ssl/certs/190272.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/190272.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "sudo cat /usr/share/ca-certificates/190272.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1902722.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "sudo cat /etc/ssl/certs/1902722.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1902722.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "sudo cat /usr/share/ca-certificates/1902722.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.64s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-843156 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843156 ssh "sudo systemctl is-active docker": exit status 1 (252.695987ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843156 ssh "sudo systemctl is-active containerd": exit status 1 (356.813209ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
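The test tolerates these non-zero exits because `systemctl is-active` reports a stopped unit by printing `inactive` and exiting with status 3 (hence `ssh: Process exited with status 3` above). A hedged sketch of that acceptance check, assuming a hypothetical `runtimeDisabled` helper rather than the test's real code:

```go
package main

import "fmt"

// runtimeDisabled interprets the result of `systemctl is-active <unit>` the
// way the test above does: a non-zero exit is acceptable as long as stdout
// says the unit is simply "inactive" (systemd exits 3 for inactive units).
func runtimeDisabled(stdout string, exitCode int) bool {
	return exitCode != 0 && stdout == "inactive"
}

func main() {
	fmt.Println(runtimeDisabled("inactive", 3)) // docker/containerd stopped -> true
	fmt.Println(runtimeDisabled("active", 0))   // runtime still running -> false
}
```

Since the cluster runs crio, both `docker` and `containerd` are expected to be inactive, which is exactly what the two probes confirm.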

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-843156 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-843156 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-smtsr" [61053f06-f67e-4c9c-aad9-822bceb0a15b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-smtsr" [61053f06-f67e-4c9c-aad9-822bceb0a15b] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.005141074s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "327.58834ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "69.480523ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "356.139797ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "70.208666ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/any-port (6.94s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843156 /tmp/TestFunctionalparallelMountCmdany-port1570690635/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765498932051440562" to /tmp/TestFunctionalparallelMountCmdany-port1570690635/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765498932051440562" to /tmp/TestFunctionalparallelMountCmdany-port1570690635/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765498932051440562" to /tmp/TestFunctionalparallelMountCmdany-port1570690635/001/test-1765498932051440562
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843156 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (168.042569ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 00:22:12.219924  190272 retry.go:31] will retry after 261.268138ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 00:22 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 00:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 00:22 test-1765498932051440562
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh cat /mount-9p/test-1765498932051440562
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-843156 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [bc79ceef-e1e5-4e84-8e50-08cddb4210cc] Pending
helpers_test.go:353: "busybox-mount" [bc79ceef-e1e5-4e84-8e50-08cddb4210cc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [bc79ceef-e1e5-4e84-8e50-08cddb4210cc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [bc79ceef-e1e5-4e84-8e50-08cddb4210cc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004250693s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-843156 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843156 /tmp/TestFunctionalparallelMountCmdany-port1570690635/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.94s)

TestFunctional/parallel/ServiceCmd/List (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 service list -o json
functional_test.go:1504: Took "284.943469ms" to run "out/minikube-linux-amd64 -p functional-843156 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/MountCmd/specific-port (1.65s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843156 /tmp/TestFunctionalparallelMountCmdspecific-port3022877136/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843156 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (229.767724ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 00:22:19.223266  190272 retry.go:31] will retry after 483.189143ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843156 /tmp/TestFunctionalparallelMountCmdspecific-port3022877136/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843156 ssh "sudo umount -f /mount-9p": exit status 1 (241.920437ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-843156 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843156 /tmp/TestFunctionalparallelMountCmdspecific-port3022877136/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.65s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.201:30150
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.201:30150
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup121380789/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup121380789/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup121380789/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843156 ssh "findmnt -T" /mount1: exit status 1 (326.50089ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 00:22:20.972493  190272 retry.go:31] will retry after 626.388851ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-843156 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup121380789/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup121380789/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup121380789/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.45s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843156 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-843156
localhost/kicbase/echo-server:functional-843156
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843156 image ls --format short --alsologtostderr:
I1212 00:22:29.050214  203391 out.go:360] Setting OutFile to fd 1 ...
I1212 00:22:29.050379  203391 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:29.050391  203391 out.go:374] Setting ErrFile to fd 2...
I1212 00:22:29.050396  203391 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:29.050613  203391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
I1212 00:22:29.051200  203391 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:22:29.051297  203391 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:22:29.053795  203391 ssh_runner.go:195] Run: systemctl --version
I1212 00:22:29.056759  203391 main.go:143] libmachine: domain functional-843156 has defined MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:29.057303  203391 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:80:58:a9", ip: ""} in network mk-functional-843156: {Iface:virbr1 ExpiryTime:2025-12-12 01:13:50 +0000 UTC Type:0 Mac:52:54:00:80:58:a9 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:functional-843156 Clientid:01:52:54:00:80:58:a9}
I1212 00:22:29.057336  203391 main.go:143] libmachine: domain functional-843156 has defined IP address 192.168.39.201 and MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:29.057531  203391 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-843156/id_rsa Username:docker}
I1212 00:22:29.150546  203391 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843156 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/minikube-local-cache-test     │ functional-843156  │ b787a4cfde38c │ 3.33kB │
│ localhost/my-image                      │ functional-843156  │ 80b101da8a105 │ 1.47MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-843156  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843156 image ls --format table --alsologtostderr:
I1212 00:22:32.868573  203492 out.go:360] Setting OutFile to fd 1 ...
I1212 00:22:32.868698  203492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:32.868710  203492 out.go:374] Setting ErrFile to fd 2...
I1212 00:22:32.868716  203492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:32.868963  203492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
I1212 00:22:32.869624  203492 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:22:32.869745  203492 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:22:32.872252  203492 ssh_runner.go:195] Run: systemctl --version
I1212 00:22:32.875083  203492 main.go:143] libmachine: domain functional-843156 has defined MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:32.875578  203492 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:80:58:a9", ip: ""} in network mk-functional-843156: {Iface:virbr1 ExpiryTime:2025-12-12 01:13:50 +0000 UTC Type:0 Mac:52:54:00:80:58:a9 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:functional-843156 Clientid:01:52:54:00:80:58:a9}
I1212 00:22:32.875610  203492 main.go:143] libmachine: domain functional-843156 has defined IP address 192.168.39.201 and MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:32.875778  203492 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-843156/id_rsa Username:docker}
I1212 00:22:32.957840  203492 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843156 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d
4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f
7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"80b101da8a105b18e29cf8abd10c4e71bb52bb8962790600ae97de39a1055136","repoDigests":["localhost/my-image@sha256:eca5f659117c312589663d253b500e05919ffed70e847acee4625845ad4fb3e6"],"repoTags":["localhost/my-image:functional-843156"],"size":"1468600"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"]
,"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8
c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","
docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-843156"],"size":"4944818"},{"id":"4685030c4e9575bfc7140202f7d09e2612c88c4a60c3d2b4a3e21532adf0e50b","repoDigests":["docker.io/library/70def9554a2bcad9892215710fb7b41d184a779d59c8cb4c7aaf52998c138805-tmp@sha256:487d28d10fde89142031b1198f309b32e679dda0c6470a4174f05112cc2722d7"],"repoTags":[],"size":"1466017"},{"id":"b787a4cfde38c9a7594e9d54828d5a706412955193361c8035a18605d056f0cc","repoDigests":["localhost/minikube-local-cache-test@sha256:cb4bea9b570d8b540fcae155d70388d5ad49891eacf9f7c28f177f05c5edbd3e"],"repoTa
gs":["localhost/minikube-local-cache-test:functional-843156"],"size":"3330"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843156 image ls --format json --alsologtostderr:
I1212 00:22:32.673257  203481 out.go:360] Setting OutFile to fd 1 ...
I1212 00:22:32.673559  203481 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:32.673570  203481 out.go:374] Setting ErrFile to fd 2...
I1212 00:22:32.673576  203481 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:32.673788  203481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
I1212 00:22:32.674365  203481 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:22:32.674502  203481 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:22:32.676724  203481 ssh_runner.go:195] Run: systemctl --version
I1212 00:22:32.679641  203481 main.go:143] libmachine: domain functional-843156 has defined MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:32.680110  203481 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:80:58:a9", ip: ""} in network mk-functional-843156: {Iface:virbr1 ExpiryTime:2025-12-12 01:13:50 +0000 UTC Type:0 Mac:52:54:00:80:58:a9 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:functional-843156 Clientid:01:52:54:00:80:58:a9}
I1212 00:22:32.680147  203481 main.go:143] libmachine: domain functional-843156 has defined IP address 192.168.39.201 and MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:32.680327  203481 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-843156/id_rsa Username:docker}
I1212 00:22:32.762201  203481 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843156 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: b787a4cfde38c9a7594e9d54828d5a706412955193361c8035a18605d056f0cc
repoDigests:
- localhost/minikube-local-cache-test@sha256:cb4bea9b570d8b540fcae155d70388d5ad49891eacf9f7c28f177f05c5edbd3e
repoTags:
- localhost/minikube-local-cache-test:functional-843156
size: "3330"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-843156
size: "4944818"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843156 image ls --format yaml --alsologtostderr:
I1212 00:22:29.280978  203412 out.go:360] Setting OutFile to fd 1 ...
I1212 00:22:29.281259  203412 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:29.281270  203412 out.go:374] Setting ErrFile to fd 2...
I1212 00:22:29.281275  203412 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:29.281552  203412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
I1212 00:22:29.282256  203412 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:22:29.282369  203412 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:22:29.284523  203412 ssh_runner.go:195] Run: systemctl --version
I1212 00:22:29.287016  203412 main.go:143] libmachine: domain functional-843156 has defined MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:29.287425  203412 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:80:58:a9", ip: ""} in network mk-functional-843156: {Iface:virbr1 ExpiryTime:2025-12-12 01:13:50 +0000 UTC Type:0 Mac:52:54:00:80:58:a9 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:functional-843156 Clientid:01:52:54:00:80:58:a9}
I1212 00:22:29.287453  203412 main.go:143] libmachine: domain functional-843156 has defined IP address 192.168.39.201 and MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:29.287707  203412 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-843156/id_rsa Username:docker}
I1212 00:22:29.393411  203412 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843156 ssh pgrep buildkitd: exit status 1 (161.480026ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image build -t localhost/my-image:functional-843156 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-843156 image build -t localhost/my-image:functional-843156 testdata/build --alsologtostderr: (2.770244977s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843156 image build -t localhost/my-image:functional-843156 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4685030c4e9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-843156
--> 80b101da8a1
Successfully tagged localhost/my-image:functional-843156
80b101da8a105b18e29cf8abd10c4e71bb52bb8962790600ae97de39a1055136
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843156 image build -t localhost/my-image:functional-843156 testdata/build --alsologtostderr:
I1212 00:22:29.691744  203433 out.go:360] Setting OutFile to fd 1 ...
I1212 00:22:29.692058  203433 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:29.692071  203433 out.go:374] Setting ErrFile to fd 2...
I1212 00:22:29.692075  203433 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:29.692275  203433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
I1212 00:22:29.692876  203433 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:22:29.693628  203433 config.go:182] Loaded profile config "functional-843156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:22:29.695684  203433 ssh_runner.go:195] Run: systemctl --version
I1212 00:22:29.697886  203433 main.go:143] libmachine: domain functional-843156 has defined MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:29.698277  203433 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:80:58:a9", ip: ""} in network mk-functional-843156: {Iface:virbr1 ExpiryTime:2025-12-12 01:13:50 +0000 UTC Type:0 Mac:52:54:00:80:58:a9 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:functional-843156 Clientid:01:52:54:00:80:58:a9}
I1212 00:22:29.698299  203433 main.go:143] libmachine: domain functional-843156 has defined IP address 192.168.39.201 and MAC address 52:54:00:80:58:a9 in network mk-functional-843156
I1212 00:22:29.698422  203433 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-843156/id_rsa Username:docker}
I1212 00:22:29.777007  203433 build_images.go:162] Building image from path: /tmp/build.141369079.tar
I1212 00:22:29.777092  203433 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 00:22:29.792123  203433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.141369079.tar
I1212 00:22:29.798709  203433 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.141369079.tar: stat -c "%s %y" /var/lib/minikube/build/build.141369079.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.141369079.tar': No such file or directory
I1212 00:22:29.798759  203433 ssh_runner.go:362] scp /tmp/build.141369079.tar --> /var/lib/minikube/build/build.141369079.tar (3072 bytes)
I1212 00:22:29.853070  203433 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.141369079
I1212 00:22:29.872939  203433 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.141369079 -xf /var/lib/minikube/build/build.141369079.tar
I1212 00:22:29.891418  203433 crio.go:315] Building image: /var/lib/minikube/build/build.141369079
I1212 00:22:29.891618  203433 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-843156 /var/lib/minikube/build/build.141369079 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1212 00:22:32.363792  203433 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-843156 /var/lib/minikube/build/build.141369079 --cgroup-manager=cgroupfs: (2.472127289s)
I1212 00:22:32.363861  203433 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.141369079
I1212 00:22:32.380014  203433 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.141369079.tar
I1212 00:22:32.395306  203433 build_images.go:218] Built localhost/my-image:functional-843156 from /tmp/build.141369079.tar
I1212 00:22:32.395356  203433 build_images.go:134] succeeded building to: functional-843156
I1212 00:22:32.395362  203433 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.14s)

TestFunctional/parallel/ImageCommands/Setup (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-843156
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image load --daemon kicbase/echo-server:functional-843156 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-843156 image load --daemon kicbase/echo-server:functional-843156 --alsologtostderr: (1.960420394s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.20s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image load --daemon kicbase/echo-server:functional-843156 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-843156
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image load --daemon kicbase/echo-server:functional-843156 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.02s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image save kicbase/echo-server:functional-843156 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image rm kicbase/echo-server:functional-843156 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-843156
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-843156 image save --daemon kicbase/echo-server:functional-843156 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-843156
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-843156
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-843156
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-843156
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22101-186349/.minikube/files/etc/test/nested/copy/190272/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (72.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-582645 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1212 00:27:54.483616  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-582645 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m12.46836683s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (72.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (37.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1212 00:28:35.967820  190272 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-582645 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-582645 --alsologtostderr -v=8: (37.371178335s)
functional_test.go:678: soft start took 37.371587995s for "functional-582645" cluster.
I1212 00:29:13.339441  190272 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (37.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-582645 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-582645 cache add registry.k8s.io/pause:3.1: (1.032216004s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-582645 cache add registry.k8s.io/pause:3.3: (1.103730333s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-582645 cache add registry.k8s.io/pause:latest: (1.12412437s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach254606265/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 cache add minikube-local-cache-test:functional-582645
E1212 00:29:17.564473  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 cache delete minikube-local-cache-test:functional-582645
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-582645
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-582645 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (186.005424ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 kubectl -- --context functional-582645 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-582645 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (59.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-582645 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-582645 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (59.831865391s)
functional_test.go:776: restart took 59.832028556s for "functional-582645" cluster.
I1212 00:30:19.988414  190272 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (59.83s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-582645 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-582645 logs: (1.468468656s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs572037133/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-582645 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs572037133/001/logs.txt: (1.479188196s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-582645 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-582645
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-582645: exit status 115 (252.648931ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.189:32284 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-582645 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-582645 delete -f testdata/invalidsvc.yaml: (1.102204429s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-582645 config get cpus: exit status 14 (65.727075ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-582645 config get cpus: exit status 14 (72.485611ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-582645 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-582645 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (118.252008ms)

-- stdout --
	* [functional-582645] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I1212 00:31:41.064009  207156 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:31:41.064122  207156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:31:41.064132  207156 out.go:374] Setting ErrFile to fd 2...
	I1212 00:31:41.064140  207156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:31:41.064357  207156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 00:31:41.064877  207156 out.go:368] Setting JSON to false
	I1212 00:31:41.065746  207156 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":22445,"bootTime":1765477056,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:31:41.065812  207156 start.go:143] virtualization: kvm guest
	I1212 00:31:41.067673  207156 out.go:179] * [functional-582645] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 00:31:41.069170  207156 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:31:41.069207  207156 notify.go:221] Checking for updates...
	I1212 00:31:41.072287  207156 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:31:41.074067  207156 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 00:31:41.075358  207156 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1212 00:31:41.076587  207156 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:31:41.077856  207156 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:31:41.079363  207156 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:31:41.079883  207156 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:31:41.112941  207156 out.go:179] * Using the kvm2 driver based on existing profile
	I1212 00:31:41.114020  207156 start.go:309] selected driver: kvm2
	I1212 00:31:41.114037  207156 start.go:927] validating driver "kvm2" against &{Name:functional-582645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-582645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:31:41.114154  207156 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:31:41.116323  207156 out.go:203] 
	W1212 00:31:41.117467  207156 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 00:31:41.118759  207156 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-582645 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-582645 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-582645 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (133.965727ms)

                                                
                                                
-- stdout --
	* [functional-582645] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:31:40.937290  207140 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:31:40.937415  207140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:31:40.937420  207140 out.go:374] Setting ErrFile to fd 2...
	I1212 00:31:40.937425  207140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:31:40.938097  207140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 00:31:40.938983  207140 out.go:368] Setting JSON to false
	I1212 00:31:40.940273  207140 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":22445,"bootTime":1765477056,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:31:40.940354  207140 start.go:143] virtualization: kvm guest
	I1212 00:31:40.942498  207140 out.go:179] * [functional-582645] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1212 00:31:40.944070  207140 notify.go:221] Checking for updates...
	I1212 00:31:40.944139  207140 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:31:40.945512  207140 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:31:40.946857  207140 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 00:31:40.948863  207140 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1212 00:31:40.950294  207140 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:31:40.951963  207140 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:31:40.954121  207140 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:31:40.955100  207140 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:31:40.993636  207140 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1212 00:31:40.994837  207140 start.go:309] selected driver: kvm2
	I1212 00:31:40.994854  207140 start.go:927] validating driver "kvm2" against &{Name:functional-582645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-582645 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:31:40.995035  207140 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:31:40.996987  207140 out.go:203] 
	W1212 00:31:40.998344  207140 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 00:31:40.999522  207140 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.72s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (77.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [e0d404c6-fa24-4c4f-91b0-edce445b5ce0] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004079562s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-582645 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-582645 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-582645 get pvc myclaim -o=json
I1212 00:30:34.407879  190272 retry.go:31] will retry after 1.626535227s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:2fe2782b-4cf9-4259-8930-88b1a36cf8e5 ResourceVersion:702 Generation:0 CreationTimestamp:2025-12-12 00:30:34 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-2fe2782b-4cf9-4259-8930-88b1a36cf8e5 StorageClassName:0xc000969000 VolumeMode:0xc000969010 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-582645 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-582645 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [95ded433-3fae-49a7-a341-fcae48ef5706] Pending
helpers_test.go:353: "sp-pod" [95ded433-3fae-49a7-a341-fcae48ef5706] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [95ded433-3fae-49a7-a341-fcae48ef5706] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m2.004427343s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-582645 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-582645 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-582645 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [2fab52d3-dfee-4229-9795-73be192e28a5] Pending
helpers_test.go:353: "sp-pod" [2fab52d3-dfee-4229-9795-73be192e28a5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004649757s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-582645 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (77.24s)
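The PVC test above polls the claim until its phase moves from "Pending" to "Bound", retrying with a delay each time (the `retry.go:31] will retry after 1.626535227s` line). A minimal sketch of that poll-with-backoff loop, with hypothetical helper names rather than the suite's real code:

```python
import time


def wait_for_phase(get_phase, want="Bound", timeout_s=240.0, base_delay=0.5):
    """Poll get_phase() with growing delays until it returns `want`,
    raising TimeoutError if the deadline would be exceeded first."""
    deadline = time.monotonic() + timeout_s
    delay = base_delay
    while True:
        phase = get_phase()
        if phase == want:
            return phase
        if time.monotonic() + delay > deadline:
            raise TimeoutError(f"phase = {phase!r}, want {want!r}")
        time.sleep(delay)
        delay *= 1.6  # back off between polls, as the retry log above does


# Simulated claim that binds on the third poll:
phases = iter(["Pending", "Pending", "Bound"])
print(wait_for_phase(lambda: next(phases), timeout_s=10, base_delay=0.01))
```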

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh -n functional-582645 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 cp functional-582645:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1112335916/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh -n functional-582645 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh -n functional-582645 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (188.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-582645 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-h5j57" [a4b87bdb-0d23-42fa-a5dd-74911a6e1c31] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-h5j57" [a4b87bdb-0d23-42fa-a5dd-74911a6e1c31] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 2m56.233096055s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-582645 exec mysql-7d7b65bc95-h5j57 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-582645 exec mysql-7d7b65bc95-h5j57 -- mysql -ppassword -e "show databases;": exit status 1 (186.816407ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 00:34:42.411357  190272 retry.go:31] will retry after 557.69998ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-582645 exec mysql-7d7b65bc95-h5j57 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-582645 exec mysql-7d7b65bc95-h5j57 -- mysql -ppassword -e "show databases;": exit status 1 (218.01878ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 00:34:43.187498  190272 retry.go:31] will retry after 890.732259ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-582645 exec mysql-7d7b65bc95-h5j57 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-582645 exec mysql-7d7b65bc95-h5j57 -- mysql -ppassword -e "show databases;": exit status 1 (204.255913ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 00:34:44.283510  190272 retry.go:31] will retry after 2.763051113s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-582645 exec mysql-7d7b65bc95-h5j57 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-582645 exec mysql-7d7b65bc95-h5j57 -- mysql -ppassword -e "show databases;": exit status 1 (135.641181ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 00:34:47.183306  190272 retry.go:31] will retry after 2.53297798s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-582645 exec mysql-7d7b65bc95-h5j57 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-582645 exec mysql-7d7b65bc95-h5j57 -- mysql -ppassword -e "show databases;": exit status 1 (143.691753ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 00:34:49.860643  190272 retry.go:31] will retry after 4.271926839s: exit status 1
E1212 00:34:53.551066  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-582645 exec mysql-7d7b65bc95-h5j57 -- mysql -ppassword -e "show databases;"
E1212 00:37:09.687260  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:37:37.392491  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:37:54.483847  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (188.49s)
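The MySQL section shows the harness re-running `show databases;` through several transient failures (first `ERROR 1045` access denied, then `ERROR 2002` socket errors) with growing delays until the container finishes initialising. That retry-on-error pattern can be sketched like this; `retry` and `flaky` are invented names for illustration, not the suite's code:

```python
import random
import time


def retry(cmd, max_attempts=8, base=0.5):
    """Re-run cmd() until it stops raising, sleeping a jittered, growing
    delay between attempts, as the retry log lines above show."""
    for attempt in range(1, max_attempts + 1):
        try:
            return cmd()
        except RuntimeError:
            if attempt == max_attempts:
                raise
            delay = base * (1.5 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(min(delay, 0.01))  # capped here so the sketch runs fast


# Simulated command that succeeds on the fourth attempt:
calls = {"n": 0}


def flaky():
    calls["n"] += 1
    if calls["n"] < 4:
        raise RuntimeError("ERROR 1045 (28000): Access denied")  # still initialising
    return "databases listed"


print(retry(flaky))
```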

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/190272/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "sudo cat /etc/test/nested/copy/190272/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/190272.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "sudo cat /etc/ssl/certs/190272.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/190272.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "sudo cat /usr/share/ca-certificates/190272.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1902722.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "sudo cat /etc/ssl/certs/1902722.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1902722.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "sudo cat /usr/share/ca-certificates/1902722.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-582645 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-582645 ssh "sudo systemctl is-active docker": exit status 1 (204.681752ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-582645 ssh "sudo systemctl is-active containerd": exit status 1 (199.020541ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-582645 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-582645
localhost/kicbase/echo-server:functional-582645
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-582645 image ls --format short --alsologtostderr:
I1212 00:31:46.280101  207394 out.go:360] Setting OutFile to fd 1 ...
I1212 00:31:46.280223  207394 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:31:46.280235  207394 out.go:374] Setting ErrFile to fd 2...
I1212 00:31:46.280242  207394 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:31:46.280445  207394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
I1212 00:31:46.281000  207394 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:31:46.281096  207394 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:31:46.283574  207394 ssh_runner.go:195] Run: systemctl --version
I1212 00:31:46.286231  207394 main.go:143] libmachine: domain functional-582645 has defined MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:46.286699  207394 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f5:68:d8", ip: ""} in network mk-functional-582645: {Iface:virbr1 ExpiryTime:2025-12-12 01:27:40 +0000 UTC Type:0 Mac:52:54:00:f5:68:d8 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-582645 Clientid:01:52:54:00:f5:68:d8}
I1212 00:31:46.286728  207394 main.go:143] libmachine: domain functional-582645 has defined IP address 192.168.39.189 and MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:46.286889  207394 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-582645/id_rsa Username:docker}
I1212 00:31:46.370353  207394 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-582645 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/kicbase/echo-server           │ functional-582645  │ 9056ab77afb8e │ 4.94MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-582645  │ b787a4cfde38c │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-582645  │ b12352dd484e0 │ 1.47MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-582645 image ls --format table --alsologtostderr:
I1212 00:31:50.085343  207476 out.go:360] Setting OutFile to fd 1 ...
I1212 00:31:50.085656  207476 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:31:50.085667  207476 out.go:374] Setting ErrFile to fd 2...
I1212 00:31:50.085670  207476 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:31:50.085870  207476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
I1212 00:31:50.086539  207476 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:31:50.086629  207476 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:31:50.089047  207476 ssh_runner.go:195] Run: systemctl --version
I1212 00:31:50.091680  207476 main.go:143] libmachine: domain functional-582645 has defined MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:50.092294  207476 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f5:68:d8", ip: ""} in network mk-functional-582645: {Iface:virbr1 ExpiryTime:2025-12-12 01:27:40 +0000 UTC Type:0 Mac:52:54:00:f5:68:d8 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-582645 Clientid:01:52:54:00:f5:68:d8}
I1212 00:31:50.092335  207476 main.go:143] libmachine: domain functional-582645 has defined IP address 192.168.39.189 and MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:50.092535  207476 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-582645/id_rsa Username:docker}
I1212 00:31:50.174207  207476 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-582645 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"b12352dd484e0bbacde962142ae64974b3708d86a7a3d5a593b1e8cd5867f883","repoDigests":["localhost/my-image@sha256:feb665f2b0bb50620be94ad9d2a0ec15b3e0a8d3c377a8dbe5b187fb5d138002"],"repoTags":["localhost/my-image:functional-582645"],"size":"1468598"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-582645"],"size":"4943877"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"8034c602687e4cf7b8900f72993e49c6a040e39c384a17d329fb617a592e6b05","repoDigests":["docker.io/library/ed5b21f25eb21cc928c3045ebedcec2c67a71bc98c20e4425d9cfcc230abb086-tmp@sha256:2b4add3bb673bebc8e7913be804a17bbd1093317f0bfeff3f2fbf1556d514a24"],"repoTags":[],"size":"1466017"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"b787a4cfde38c9a7594e9d54828d5a706412955193361c8035a18605d056f0cc","repoDigests":["localhost/minikube-local-cache-test@sha256:cb4bea9b570d8b540fcae155d70388d5ad49891eacf9f7c28f177f05c5edbd3e"],"repoTags":["localhost/minikube-local-cache-test:functional-582645"],"size":"3330"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-582645 image ls --format json --alsologtostderr:
I1212 00:31:49.888442  207465 out.go:360] Setting OutFile to fd 1 ...
I1212 00:31:49.888705  207465 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:31:49.888721  207465 out.go:374] Setting ErrFile to fd 2...
I1212 00:31:49.888726  207465 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:31:49.888926  207465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
I1212 00:31:49.889528  207465 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:31:49.889626  207465 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:31:49.891965  207465 ssh_runner.go:195] Run: systemctl --version
I1212 00:31:49.894705  207465 main.go:143] libmachine: domain functional-582645 has defined MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:49.895184  207465 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f5:68:d8", ip: ""} in network mk-functional-582645: {Iface:virbr1 ExpiryTime:2025-12-12 01:27:40 +0000 UTC Type:0 Mac:52:54:00:f5:68:d8 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-582645 Clientid:01:52:54:00:f5:68:d8}
I1212 00:31:49.895208  207465 main.go:143] libmachine: domain functional-582645 has defined IP address 192.168.39.189 and MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:49.895395  207465 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-582645/id_rsa Username:docker}
I1212 00:31:49.978783  207465 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-582645 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-582645
size: "4943877"
- id: b787a4cfde38c9a7594e9d54828d5a706412955193361c8035a18605d056f0cc
repoDigests:
- localhost/minikube-local-cache-test@sha256:cb4bea9b570d8b540fcae155d70388d5ad49891eacf9f7c28f177f05c5edbd3e
repoTags:
- localhost/minikube-local-cache-test:functional-582645
size: "3330"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-582645 image ls --format yaml --alsologtostderr:
I1212 00:31:46.532270  207421 out.go:360] Setting OutFile to fd 1 ...
I1212 00:31:46.532600  207421 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:31:46.532613  207421 out.go:374] Setting ErrFile to fd 2...
I1212 00:31:46.532617  207421 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:31:46.532830  207421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
I1212 00:31:46.533429  207421 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:31:46.533543  207421 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:31:46.535710  207421 ssh_runner.go:195] Run: systemctl --version
I1212 00:31:46.538105  207421 main.go:143] libmachine: domain functional-582645 has defined MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:46.538598  207421 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f5:68:d8", ip: ""} in network mk-functional-582645: {Iface:virbr1 ExpiryTime:2025-12-12 01:27:40 +0000 UTC Type:0 Mac:52:54:00:f5:68:d8 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-582645 Clientid:01:52:54:00:f5:68:d8}
I1212 00:31:46.538631  207421 main.go:143] libmachine: domain functional-582645 has defined IP address 192.168.39.189 and MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:46.538814  207421 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-582645/id_rsa Username:docker}
I1212 00:31:46.635410  207421 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.14s)
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-582645 ssh pgrep buildkitd: exit status 1 (166.401708ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image build -t localhost/my-image:functional-582645 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-582645 image build -t localhost/my-image:functional-582645 testdata/build --alsologtostderr: (2.750762113s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-582645 image build -t localhost/my-image:functional-582645 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8034c602687
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-582645
--> b12352dd484
Successfully tagged localhost/my-image:functional-582645
b12352dd484e0bbacde962142ae64974b3708d86a7a3d5a593b1e8cd5867f883
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-582645 image build -t localhost/my-image:functional-582645 testdata/build --alsologtostderr:
I1212 00:31:46.913826  207443 out.go:360] Setting OutFile to fd 1 ...
I1212 00:31:46.914101  207443 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:31:46.914112  207443 out.go:374] Setting ErrFile to fd 2...
I1212 00:31:46.914116  207443 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:31:46.914314  207443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
I1212 00:31:46.914909  207443 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:31:46.915599  207443 config.go:182] Loaded profile config "functional-582645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:31:46.918094  207443 ssh_runner.go:195] Run: systemctl --version
I1212 00:31:46.920718  207443 main.go:143] libmachine: domain functional-582645 has defined MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:46.921173  207443 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f5:68:d8", ip: ""} in network mk-functional-582645: {Iface:virbr1 ExpiryTime:2025-12-12 01:27:40 +0000 UTC Type:0 Mac:52:54:00:f5:68:d8 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-582645 Clientid:01:52:54:00:f5:68:d8}
I1212 00:31:46.921202  207443 main.go:143] libmachine: domain functional-582645 has defined IP address 192.168.39.189 and MAC address 52:54:00:f5:68:d8 in network mk-functional-582645
I1212 00:31:46.921369  207443 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/functional-582645/id_rsa Username:docker}
I1212 00:31:47.002359  207443 build_images.go:162] Building image from path: /tmp/build.1840160893.tar
I1212 00:31:47.002427  207443 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 00:31:47.017567  207443 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1840160893.tar
I1212 00:31:47.023414  207443 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1840160893.tar: stat -c "%s %y" /var/lib/minikube/build/build.1840160893.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1840160893.tar': No such file or directory
I1212 00:31:47.023471  207443 ssh_runner.go:362] scp /tmp/build.1840160893.tar --> /var/lib/minikube/build/build.1840160893.tar (3072 bytes)
I1212 00:31:47.062835  207443 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1840160893
I1212 00:31:47.078269  207443 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1840160893 -xf /var/lib/minikube/build/build.1840160893.tar
I1212 00:31:47.095061  207443 crio.go:315] Building image: /var/lib/minikube/build/build.1840160893
I1212 00:31:47.095197  207443 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-582645 /var/lib/minikube/build/build.1840160893 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1212 00:31:49.566268  207443 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-582645 /var/lib/minikube/build/build.1840160893 --cgroup-manager=cgroupfs: (2.471036067s)
I1212 00:31:49.566356  207443 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1840160893
I1212 00:31:49.582015  207443 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1840160893.tar
I1212 00:31:49.595554  207443 build_images.go:218] Built localhost/my-image:functional-582645 from /tmp/build.1840160893.tar
I1212 00:31:49.595601  207443 build_images.go:134] succeeded building to: functional-582645
I1212 00:31:49.595607  207443 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-582645
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image load --daemon kicbase/echo-server:functional-582645 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-582645 image load --daemon kicbase/echo-server:functional-582645 --alsologtostderr: (1.701110688s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.95s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image load --daemon kicbase/echo-server:functional-582645 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-582645
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image load --daemon kicbase/echo-server:functional-582645 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image save kicbase/echo-server:functional-582645 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image rm kicbase/echo-server:functional-582645 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.94s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-582645
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 image save --daemon kicbase/echo-server:functional-582645 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-582645
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "257.102995ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "69.885114ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "261.151355ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "70.693617ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (62.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1887540447/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765499435163221214" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1887540447/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765499435163221214" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1887540447/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765499435163221214" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1887540447/001/test-1765499435163221214
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-582645 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (159.194883ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 00:30:35.322764  190272 retry.go:31] will retry after 624.152576ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh -- ls -la /mount-9p
I1212 00:30:36.234175  190272 detect.go:223] nested VM detected
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 00:30 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 00:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 00:30 test-1765499435163221214
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh cat /mount-9p/test-1765499435163221214
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-582645 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [76636df6-0816-48f2-bff8-d27f5aa9f041] Pending
helpers_test.go:353: "busybox-mount" [76636df6-0816-48f2-bff8-d27f5aa9f041] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [76636df6-0816-48f2-bff8-d27f5aa9f041] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [76636df6-0816-48f2-bff8-d27f5aa9f041] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m0.003751205s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-582645 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1887540447/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (62.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1839879261/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-582645 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (163.515976ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 00:31:37.462368  190272 retry.go:31] will retry after 660.512833ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1839879261/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-582645 ssh "sudo umount -f /mount-9p": exit status 1 (199.707888ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-582645 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1839879261/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3378403719/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3378403719/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3378403719/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-582645 ssh "findmnt -T" /mount1: exit status 1 (203.602241ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 00:31:39.092025  190272 retry.go:31] will retry after 473.336424ms: exit status 1
I1212 00:31:39.230339  190272 detect.go:223] nested VM detected
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-582645 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3378403719/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3378403719/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-582645 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3378403719/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 version -o=json --components
E1212 00:32:09.687506  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:09.694054  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:09.705612  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:09.727239  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:09.768868  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:09.850846  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:10.012487  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:10.334384  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:10.976131  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:12.258576  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:14.820985  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:19.942363  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:30.184075  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:50.666418  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:32:54.483480  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:33:31.629001  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-582645 service list: (1.248820381s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-582645 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-582645 service list -o json: (1.258257041s)
functional_test.go:1504: Took "1.25841101s" to run "out/minikube-linux-amd64 -p functional-582645 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-582645
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-582645
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-582645
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (261.66s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1212 00:42:09.688452  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:42:54.483295  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-313450 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (4m20.983855025s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (261.66s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.7s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-313450 kubectl -- rollout status deployment/busybox: (4.589237572s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-4hqx9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-hrcjm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-kqwzj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-4hqx9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-hrcjm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-kqwzj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-4hqx9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-hrcjm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-kqwzj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.70s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.73s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-4hqx9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-4hqx9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-hrcjm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-hrcjm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-kqwzj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 kubectl -- exec busybox-7b57f96db7-kqwzj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.73s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (47.99s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 node add --alsologtostderr -v 5
E1212 00:45:28.129349  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:45:28.135913  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:45:28.147502  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:45:28.169073  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:45:28.210424  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:45:28.292065  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:45:28.454060  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:45:28.775905  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:45:29.417965  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:45:30.700174  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:45:33.262644  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:45:38.384212  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:45:48.626109  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-313450 node add --alsologtostderr -v 5: (47.155712798s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.99s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.09s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-313450 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.09s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.3s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp testdata/cp-test.txt ha-313450:/home/docker/cp-test.txt
E1212 00:45:57.566030  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1011778458/001/cp-test_ha-313450.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450:/home/docker/cp-test.txt ha-313450-m02:/home/docker/cp-test_ha-313450_ha-313450-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m02 "sudo cat /home/docker/cp-test_ha-313450_ha-313450-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450:/home/docker/cp-test.txt ha-313450-m03:/home/docker/cp-test_ha-313450_ha-313450-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m03 "sudo cat /home/docker/cp-test_ha-313450_ha-313450-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450:/home/docker/cp-test.txt ha-313450-m04:/home/docker/cp-test_ha-313450_ha-313450-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m04 "sudo cat /home/docker/cp-test_ha-313450_ha-313450-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp testdata/cp-test.txt ha-313450-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1011778458/001/cp-test_ha-313450-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450-m02:/home/docker/cp-test.txt ha-313450:/home/docker/cp-test_ha-313450-m02_ha-313450.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450 "sudo cat /home/docker/cp-test_ha-313450-m02_ha-313450.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450-m02:/home/docker/cp-test.txt ha-313450-m03:/home/docker/cp-test_ha-313450-m02_ha-313450-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m03 "sudo cat /home/docker/cp-test_ha-313450-m02_ha-313450-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450-m02:/home/docker/cp-test.txt ha-313450-m04:/home/docker/cp-test_ha-313450-m02_ha-313450-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m04 "sudo cat /home/docker/cp-test_ha-313450-m02_ha-313450-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp testdata/cp-test.txt ha-313450-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1011778458/001/cp-test_ha-313450-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450-m03:/home/docker/cp-test.txt ha-313450:/home/docker/cp-test_ha-313450-m03_ha-313450.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450 "sudo cat /home/docker/cp-test_ha-313450-m03_ha-313450.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450-m03:/home/docker/cp-test.txt ha-313450-m02:/home/docker/cp-test_ha-313450-m03_ha-313450-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m02 "sudo cat /home/docker/cp-test_ha-313450-m03_ha-313450-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450-m03:/home/docker/cp-test.txt ha-313450-m04:/home/docker/cp-test_ha-313450-m03_ha-313450-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m04 "sudo cat /home/docker/cp-test_ha-313450-m03_ha-313450-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp testdata/cp-test.txt ha-313450-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1011778458/001/cp-test_ha-313450-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450-m04:/home/docker/cp-test.txt ha-313450:/home/docker/cp-test_ha-313450-m04_ha-313450.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450 "sudo cat /home/docker/cp-test_ha-313450-m04_ha-313450.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450-m04:/home/docker/cp-test.txt ha-313450-m02:/home/docker/cp-test_ha-313450-m04_ha-313450-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m02 "sudo cat /home/docker/cp-test_ha-313450-m04_ha-313450-m02.txt"
E1212 00:46:09.108332  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 cp ha-313450-m04:/home/docker/cp-test.txt ha-313450-m03:/home/docker/cp-test_ha-313450-m04_ha-313450-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 ssh -n ha-313450-m03 "sudo cat /home/docker/cp-test_ha-313450-m04_ha-313450-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.30s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (82.48s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 node stop m02 --alsologtostderr -v 5
E1212 00:46:50.070760  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:47:09.688395  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-313450 node stop m02 --alsologtostderr -v 5: (1m21.823141596s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313450 status --alsologtostderr -v 5: exit status 7 (657.051614ms)

-- stdout --
	ha-313450
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313450-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-313450-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313450-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1212 00:47:31.757213  212781 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:47:31.757422  212781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:47:31.757432  212781 out.go:374] Setting ErrFile to fd 2...
	I1212 00:47:31.757439  212781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:47:31.757753  212781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 00:47:31.757963  212781 out.go:368] Setting JSON to false
	I1212 00:47:31.757995  212781 mustload.go:66] Loading cluster: ha-313450
	I1212 00:47:31.758170  212781 notify.go:221] Checking for updates...
	I1212 00:47:31.758397  212781 config.go:182] Loaded profile config "ha-313450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:47:31.758417  212781 status.go:174] checking status of ha-313450 ...
	I1212 00:47:31.761858  212781 status.go:371] ha-313450 host status = "Running" (err=<nil>)
	I1212 00:47:31.761890  212781 host.go:66] Checking if "ha-313450" exists ...
	I1212 00:47:31.766617  212781 main.go:143] libmachine: domain ha-313450 has defined MAC address 52:54:00:c2:ae:0d in network mk-ha-313450
	I1212 00:47:31.767634  212781 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c2:ae:0d", ip: ""} in network mk-ha-313450: {Iface:virbr1 ExpiryTime:2025-12-12 01:40:52 +0000 UTC Type:0 Mac:52:54:00:c2:ae:0d Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-313450 Clientid:01:52:54:00:c2:ae:0d}
	I1212 00:47:31.767694  212781 main.go:143] libmachine: domain ha-313450 has defined IP address 192.168.39.161 and MAC address 52:54:00:c2:ae:0d in network mk-ha-313450
	I1212 00:47:31.767972  212781 host.go:66] Checking if "ha-313450" exists ...
	I1212 00:47:31.768354  212781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:47:31.772726  212781 main.go:143] libmachine: domain ha-313450 has defined MAC address 52:54:00:c2:ae:0d in network mk-ha-313450
	I1212 00:47:31.773613  212781 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c2:ae:0d", ip: ""} in network mk-ha-313450: {Iface:virbr1 ExpiryTime:2025-12-12 01:40:52 +0000 UTC Type:0 Mac:52:54:00:c2:ae:0d Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-313450 Clientid:01:52:54:00:c2:ae:0d}
	I1212 00:47:31.773653  212781 main.go:143] libmachine: domain ha-313450 has defined IP address 192.168.39.161 and MAC address 52:54:00:c2:ae:0d in network mk-ha-313450
	I1212 00:47:31.773875  212781 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/ha-313450/id_rsa Username:docker}
	I1212 00:47:31.873263  212781 ssh_runner.go:195] Run: systemctl --version
	I1212 00:47:31.884037  212781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:47:31.917954  212781 kubeconfig.go:125] found "ha-313450" server: "https://192.168.39.254:8443"
	I1212 00:47:31.918053  212781 api_server.go:166] Checking apiserver status ...
	I1212 00:47:31.918127  212781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:47:31.951284  212781 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1451/cgroup
	W1212 00:47:31.976410  212781 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1451/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:47:31.976530  212781 ssh_runner.go:195] Run: ls
	I1212 00:47:31.984374  212781 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1212 00:47:31.991115  212781 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1212 00:47:31.991184  212781 status.go:463] ha-313450 apiserver status = Running (err=<nil>)
	I1212 00:47:31.991207  212781 status.go:176] ha-313450 status: &{Name:ha-313450 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:47:31.991243  212781 status.go:174] checking status of ha-313450-m02 ...
	I1212 00:47:31.993899  212781 status.go:371] ha-313450-m02 host status = "Stopped" (err=<nil>)
	I1212 00:47:31.993934  212781 status.go:384] host is not running, skipping remaining checks
	I1212 00:47:31.993941  212781 status.go:176] ha-313450-m02 status: &{Name:ha-313450-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:47:31.993977  212781 status.go:174] checking status of ha-313450-m03 ...
	I1212 00:47:31.995373  212781 status.go:371] ha-313450-m03 host status = "Running" (err=<nil>)
	I1212 00:47:31.995395  212781 host.go:66] Checking if "ha-313450-m03" exists ...
	I1212 00:47:31.999073  212781 main.go:143] libmachine: domain ha-313450-m03 has defined MAC address 52:54:00:f9:d3:6c in network mk-ha-313450
	I1212 00:47:31.999828  212781 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:d3:6c", ip: ""} in network mk-ha-313450: {Iface:virbr1 ExpiryTime:2025-12-12 01:43:08 +0000 UTC Type:0 Mac:52:54:00:f9:d3:6c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-313450-m03 Clientid:01:52:54:00:f9:d3:6c}
	I1212 00:47:31.999889  212781 main.go:143] libmachine: domain ha-313450-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:f9:d3:6c in network mk-ha-313450
	I1212 00:47:32.000136  212781 host.go:66] Checking if "ha-313450-m03" exists ...
	I1212 00:47:32.000512  212781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:47:32.003654  212781 main.go:143] libmachine: domain ha-313450-m03 has defined MAC address 52:54:00:f9:d3:6c in network mk-ha-313450
	I1212 00:47:32.004358  212781 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:d3:6c", ip: ""} in network mk-ha-313450: {Iface:virbr1 ExpiryTime:2025-12-12 01:43:08 +0000 UTC Type:0 Mac:52:54:00:f9:d3:6c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-313450-m03 Clientid:01:52:54:00:f9:d3:6c}
	I1212 00:47:32.004393  212781 main.go:143] libmachine: domain ha-313450-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:f9:d3:6c in network mk-ha-313450
	I1212 00:47:32.004686  212781 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/ha-313450-m03/id_rsa Username:docker}
	I1212 00:47:32.107077  212781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:47:32.135123  212781 kubeconfig.go:125] found "ha-313450" server: "https://192.168.39.254:8443"
	I1212 00:47:32.135167  212781 api_server.go:166] Checking apiserver status ...
	I1212 00:47:32.135236  212781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:47:32.164929  212781 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1844/cgroup
	W1212 00:47:32.179268  212781 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1844/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:47:32.179362  212781 ssh_runner.go:195] Run: ls
	I1212 00:47:32.185779  212781 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1212 00:47:32.194656  212781 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1212 00:47:32.194694  212781 status.go:463] ha-313450-m03 apiserver status = Running (err=<nil>)
	I1212 00:47:32.194706  212781 status.go:176] ha-313450-m03 status: &{Name:ha-313450-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:47:32.194730  212781 status.go:174] checking status of ha-313450-m04 ...
	I1212 00:47:32.197230  212781 status.go:371] ha-313450-m04 host status = "Running" (err=<nil>)
	I1212 00:47:32.197284  212781 host.go:66] Checking if "ha-313450-m04" exists ...
	I1212 00:47:32.201343  212781 main.go:143] libmachine: domain ha-313450-m04 has defined MAC address 52:54:00:8b:fc:55 in network mk-ha-313450
	I1212 00:47:32.202132  212781 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fc:55", ip: ""} in network mk-ha-313450: {Iface:virbr1 ExpiryTime:2025-12-12 01:45:25 +0000 UTC Type:0 Mac:52:54:00:8b:fc:55 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313450-m04 Clientid:01:52:54:00:8b:fc:55}
	I1212 00:47:32.202166  212781 main.go:143] libmachine: domain ha-313450-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:8b:fc:55 in network mk-ha-313450
	I1212 00:47:32.202324  212781 host.go:66] Checking if "ha-313450-m04" exists ...
	I1212 00:47:32.202596  212781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:47:32.205797  212781 main.go:143] libmachine: domain ha-313450-m04 has defined MAC address 52:54:00:8b:fc:55 in network mk-ha-313450
	I1212 00:47:32.206303  212781 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:fc:55", ip: ""} in network mk-ha-313450: {Iface:virbr1 ExpiryTime:2025-12-12 01:45:25 +0000 UTC Type:0 Mac:52:54:00:8b:fc:55 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313450-m04 Clientid:01:52:54:00:8b:fc:55}
	I1212 00:47:32.206328  212781 main.go:143] libmachine: domain ha-313450-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:8b:fc:55 in network mk-ha-313450
	I1212 00:47:32.206707  212781 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/ha-313450-m04/id_rsa Username:docker}
	I1212 00:47:32.307297  212781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:47:32.337574  212781 status.go:176] ha-313450-m04 status: &{Name:ha-313450-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (82.48s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (41.94s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 node start m02 --alsologtostderr -v 5
E1212 00:47:54.483565  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:48:11.992325  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-313450 node start m02 --alsologtostderr -v 5: (40.702054808s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-313450 status --alsologtostderr -v 5: (1.092606443s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.94s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.054139346s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.88s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 stop --alsologtostderr -v 5
E1212 00:48:32.754175  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:50:28.130024  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:50:55.834879  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:52:09.687646  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-313450 stop --alsologtostderr -v 5: (4m13.744774368s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 start --wait true --alsologtostderr -v 5
E1212 00:52:54.483086  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-313450 start --wait true --alsologtostderr -v 5: (2m5.960339369s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.88s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.91s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-313450 node delete m03 --alsologtostderr -v 5: (18.1774695s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.91s)
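The `kubectl get nodes -o go-template` invocation above walks every node's `status.conditions` and prints the status of each condition whose type is `Ready`. As an illustrative aside (not part of the test run), the same traversal can be sketched in Python against a hypothetical, trimmed shape of `kubectl get nodes -o json`:

```python
# Hypothetical, trimmed shape of `kubectl get nodes -o json`: only the
# fields the go-template reads (status.conditions[].type / .status).
sample = {
    "items": [
        {"status": {"conditions": [
            {"type": "MemoryPressure", "status": "False"},
            {"type": "Ready", "status": "True"},
        ]}},
        {"status": {"conditions": [
            {"type": "Ready", "status": "True"},
        ]}},
    ]
}

def ready_statuses(nodes):
    # For each item, emit the status of every condition whose type is
    # "Ready", mirroring the {{if eq .type "Ready"}} branch of the template.
    return [
        cond["status"]
        for item in nodes["items"]
        for cond in item["status"]["conditions"]
        if cond["type"] == "Ready"
    ]

print(ready_statuses(sample))  # one "True" per Ready node
```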

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (243.18s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 stop --alsologtostderr -v 5
E1212 00:55:28.131908  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:57:09.688492  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:57:54.483503  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-313450 stop --alsologtostderr -v 5: (4m3.109282607s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313450 status --alsologtostderr -v 5: exit status 7 (72.248079ms)

                                                
                                                
-- stdout --
	ha-313450
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-313450-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-313450-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:58:58.487072  216119 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:58:58.487209  216119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:58:58.487220  216119 out.go:374] Setting ErrFile to fd 2...
	I1212 00:58:58.487227  216119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:58:58.487478  216119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 00:58:58.487662  216119 out.go:368] Setting JSON to false
	I1212 00:58:58.487691  216119 mustload.go:66] Loading cluster: ha-313450
	I1212 00:58:58.487804  216119 notify.go:221] Checking for updates...
	I1212 00:58:58.488020  216119 config.go:182] Loaded profile config "ha-313450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:58:58.488034  216119 status.go:174] checking status of ha-313450 ...
	I1212 00:58:58.490350  216119 status.go:371] ha-313450 host status = "Stopped" (err=<nil>)
	I1212 00:58:58.490376  216119 status.go:384] host is not running, skipping remaining checks
	I1212 00:58:58.490383  216119 status.go:176] ha-313450 status: &{Name:ha-313450 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:58:58.490408  216119 status.go:174] checking status of ha-313450-m02 ...
	I1212 00:58:58.491731  216119 status.go:371] ha-313450-m02 host status = "Stopped" (err=<nil>)
	I1212 00:58:58.491748  216119 status.go:384] host is not running, skipping remaining checks
	I1212 00:58:58.491754  216119 status.go:176] ha-313450-m02 status: &{Name:ha-313450-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:58:58.491768  216119 status.go:174] checking status of ha-313450-m04 ...
	I1212 00:58:58.493005  216119 status.go:371] ha-313450-m04 host status = "Stopped" (err=<nil>)
	I1212 00:58:58.493025  216119 status.go:384] host is not running, skipping remaining checks
	I1212 00:58:58.493031  216119 status.go:176] ha-313450-m04 status: &{Name:ha-313450-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (243.18s)
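The stdout block above shows the plain-text layout of `minikube status`: one stanza per node, a bare node name followed by `key: value` lines, with blank lines between stanzas. A minimal sketch of parsing that layout (the `parse_status` helper is hypothetical, not a minikube API):

```python
# Sample reproducing the stanza layout captured in the stdout above.
sample_status = """\
ha-313450
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-313450-m04
type: Worker
host: Stopped
kubelet: Stopped
"""

def parse_status(text):
    # A line seen while no node is current opens a new stanza; subsequent
    # "key: value" lines attach to it; a blank line closes the stanza.
    nodes, current = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            current = None
        elif current is None:
            current = line
            nodes[current] = {}
        else:
            key, _, val = line.partition(":")
            nodes[current][key.strip()] = val.strip()
    return nodes

print(parse_status(sample_status)["ha-313450-m04"])
```

Note that worker stanzas carry no `apiserver`/`kubeconfig` rows, matching the `-m04` block in the captured output.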

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (93.68s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1212 01:00:28.128966  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-313450 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m32.939213232s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (93.68s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (82.02s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 node add --control-plane --alsologtostderr -v 5
E1212 01:01:51.196963  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-313450 node add --control-plane --alsologtostderr -v 5: (1m21.264792186s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-313450 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.02s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

                                                
                                    
TestJSONOutput/start/Command (89.54s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-436683 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1212 01:02:09.687775  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:02:37.570232  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:02:54.482938  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-436683 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m29.541833787s)
--- PASS: TestJSONOutput/start/Command (89.54s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.8s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-436683 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.7s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-436683 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.37s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-436683 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-436683 --output=json --user=testUser: (8.367226619s)
--- PASS: TestJSONOutput/stop/Command (8.37s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-649915 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-649915 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (81.877322ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7b6d68b7-fdc4-41ee-8a66-659915ab951a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-649915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"539f9459-b938-4c81-9626-8d1f45d19dba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22101"}}
	{"specversion":"1.0","id":"8b63087c-24ae-45b0-b1c3-a41e56ad46bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f9f0f264-5c1f-40ce-9f1f-07941e1927de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig"}}
	{"specversion":"1.0","id":"32728b0f-5668-42a6-bb47-a411a888d175","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube"}}
	{"specversion":"1.0","id":"f246b582-56ce-457f-9838-3b11f7ace89a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b8b1e1b4-df26-49c0-a64b-fa62928d1400","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b622edfa-bf9f-4e43-a315-d99533028b8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-649915" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-649915
--- PASS: TestErrorJSONOutput (0.25s)
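Each line of the `--output=json` stdout above is a CloudEvents envelope with a minikube-specific `type` and a JSON `data` payload. A minimal sketch of pulling the error event out of such a stream (two lines reproduced from the captured output, with ids shortened):

```python
import json

# Two CloudEvents lines as emitted by `minikube start --output=json`
# (structure unchanged from the stdout above; ids shortened).
events = [
    '{"specversion":"1.0","id":"539f9459","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json",'
    '"data":{"message":"MINIKUBE_LOCATION=22101"}}',
    '{"specversion":"1.0","id":"b622edfa","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json",'
    '"data":{"advice":"","exitcode":"56","issues":"",'
    '"message":"The driver \'fail\' is not supported on linux/amd64",'
    '"name":"DRV_UNSUPPORTED_OS","url":""}}',
]

def first_error(lines):
    # Decode each line and return the data payload of the first
    # io.k8s.sigs.minikube.error event, or None if the run had no error.
    for line in lines:
        ev = json.loads(line)
        if ev["type"] == "io.k8s.sigs.minikube.error":
            return ev["data"]
    return None

err = first_error(events)
print(err["name"], err["exitcode"])  # DRV_UNSUPPORTED_OS 56
```

The `data.exitcode` field here matches the process exit status 56 that the test asserts on.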

                                                
                                    
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (87.67s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-662005 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-662005 --driver=kvm2  --container-runtime=crio: (42.326602706s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-664634 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-664634 --driver=kvm2  --container-runtime=crio: (42.337573497s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-662005
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-664634
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-664634" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-664634
helpers_test.go:176: Cleaning up "first-662005" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-662005
--- PASS: TestMinikubeProfile (87.67s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (23.79s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-358913 --memory=3072 --mount-string /tmp/TestMountStartserial435565532/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1212 01:05:12.758780  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:05:28.131386  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-358913 --memory=3072 --mount-string /tmp/TestMountStartserial435565532/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.79343227s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.79s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-358913 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-358913 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.87s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-381465 --memory=3072 --mount-string /tmp/TestMountStartserial435565532/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-381465 --memory=3072 --mount-string /tmp/TestMountStartserial435565532/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.868495435s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.87s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.32s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381465 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381465 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-358913 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381465 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381465 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

TestMountStart/serial/Stop (1.38s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-381465
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-381465: (1.382384151s)
--- PASS: TestMountStart/serial/Stop (1.38s)

TestMountStart/serial/RestartStopped (21.41s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-381465
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-381465: (20.405520086s)
--- PASS: TestMountStart/serial/RestartStopped (21.41s)

TestMountStart/serial/VerifyMountPostStop (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381465 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381465 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.33s)

TestMultiNode/serial/FreshStart2Nodes (135.6s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-074388 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1212 01:07:09.687781  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:07:54.483252  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-074388 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m15.218017697s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (135.60s)

TestMultiNode/serial/DeployApp2Nodes (5.23s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-074388 -- rollout status deployment/busybox: (3.430336499s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- exec busybox-7b57f96db7-bz9qk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- exec busybox-7b57f96db7-z8xmx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- exec busybox-7b57f96db7-bz9qk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- exec busybox-7b57f96db7-z8xmx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- exec busybox-7b57f96db7-bz9qk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- exec busybox-7b57f96db7-z8xmx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.23s)

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- exec busybox-7b57f96db7-bz9qk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- exec busybox-7b57f96db7-bz9qk -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- exec busybox-7b57f96db7-z8xmx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074388 -- exec busybox-7b57f96db7-z8xmx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)

TestMultiNode/serial/AddNode (46.75s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-074388 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-074388 -v=5 --alsologtostderr: (46.24952731s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.75s)

TestMultiNode/serial/MultiNodeLabels (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-074388 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.49s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.49s)

TestMultiNode/serial/CopyFile (6.52s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 cp testdata/cp-test.txt multinode-074388:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 cp multinode-074388:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1751888186/001/cp-test_multinode-074388.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 cp multinode-074388:/home/docker/cp-test.txt multinode-074388-m02:/home/docker/cp-test_multinode-074388_multinode-074388-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388-m02 "sudo cat /home/docker/cp-test_multinode-074388_multinode-074388-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 cp multinode-074388:/home/docker/cp-test.txt multinode-074388-m03:/home/docker/cp-test_multinode-074388_multinode-074388-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388-m03 "sudo cat /home/docker/cp-test_multinode-074388_multinode-074388-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 cp testdata/cp-test.txt multinode-074388-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 cp multinode-074388-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1751888186/001/cp-test_multinode-074388-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 cp multinode-074388-m02:/home/docker/cp-test.txt multinode-074388:/home/docker/cp-test_multinode-074388-m02_multinode-074388.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388 "sudo cat /home/docker/cp-test_multinode-074388-m02_multinode-074388.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 cp multinode-074388-m02:/home/docker/cp-test.txt multinode-074388-m03:/home/docker/cp-test_multinode-074388-m02_multinode-074388-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388-m03 "sudo cat /home/docker/cp-test_multinode-074388-m02_multinode-074388-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 cp testdata/cp-test.txt multinode-074388-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 cp multinode-074388-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1751888186/001/cp-test_multinode-074388-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 cp multinode-074388-m03:/home/docker/cp-test.txt multinode-074388:/home/docker/cp-test_multinode-074388-m03_multinode-074388.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388 "sudo cat /home/docker/cp-test_multinode-074388-m03_multinode-074388.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 cp multinode-074388-m03:/home/docker/cp-test.txt multinode-074388-m02:/home/docker/cp-test_multinode-074388-m03_multinode-074388-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 ssh -n multinode-074388-m02 "sudo cat /home/docker/cp-test_multinode-074388-m03_multinode-074388-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.52s)

TestMultiNode/serial/StopNode (2.55s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-074388 node stop m03: (1.848494312s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-074388 status: exit status 7 (347.640111ms)
-- stdout --
	multinode-074388
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-074388-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-074388-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-074388 status --alsologtostderr: exit status 7 (350.671732ms)
-- stdout --
	multinode-074388
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-074388-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-074388-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1212 01:09:39.119838  221952 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:09:39.120107  221952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:09:39.120117  221952 out.go:374] Setting ErrFile to fd 2...
	I1212 01:09:39.120122  221952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:09:39.120336  221952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 01:09:39.120563  221952 out.go:368] Setting JSON to false
	I1212 01:09:39.120597  221952 mustload.go:66] Loading cluster: multinode-074388
	I1212 01:09:39.120844  221952 notify.go:221] Checking for updates...
	I1212 01:09:39.122277  221952 config.go:182] Loaded profile config "multinode-074388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:09:39.122308  221952 status.go:174] checking status of multinode-074388 ...
	I1212 01:09:39.124668  221952 status.go:371] multinode-074388 host status = "Running" (err=<nil>)
	I1212 01:09:39.124693  221952 host.go:66] Checking if "multinode-074388" exists ...
	I1212 01:09:39.127197  221952 main.go:143] libmachine: domain multinode-074388 has defined MAC address 52:54:00:7d:d5:a9 in network mk-multinode-074388
	I1212 01:09:39.127676  221952 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7d:d5:a9", ip: ""} in network mk-multinode-074388: {Iface:virbr1 ExpiryTime:2025-12-12 02:06:37 +0000 UTC Type:0 Mac:52:54:00:7d:d5:a9 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-074388 Clientid:01:52:54:00:7d:d5:a9}
	I1212 01:09:39.127704  221952 main.go:143] libmachine: domain multinode-074388 has defined IP address 192.168.39.185 and MAC address 52:54:00:7d:d5:a9 in network mk-multinode-074388
	I1212 01:09:39.127867  221952 host.go:66] Checking if "multinode-074388" exists ...
	I1212 01:09:39.128068  221952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:09:39.130246  221952 main.go:143] libmachine: domain multinode-074388 has defined MAC address 52:54:00:7d:d5:a9 in network mk-multinode-074388
	I1212 01:09:39.130653  221952 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7d:d5:a9", ip: ""} in network mk-multinode-074388: {Iface:virbr1 ExpiryTime:2025-12-12 02:06:37 +0000 UTC Type:0 Mac:52:54:00:7d:d5:a9 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-074388 Clientid:01:52:54:00:7d:d5:a9}
	I1212 01:09:39.130675  221952 main.go:143] libmachine: domain multinode-074388 has defined IP address 192.168.39.185 and MAC address 52:54:00:7d:d5:a9 in network mk-multinode-074388
	I1212 01:09:39.130804  221952 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/multinode-074388/id_rsa Username:docker}
	I1212 01:09:39.223161  221952 ssh_runner.go:195] Run: systemctl --version
	I1212 01:09:39.230979  221952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:39.251095  221952 kubeconfig.go:125] found "multinode-074388" server: "https://192.168.39.185:8443"
	I1212 01:09:39.251138  221952 api_server.go:166] Checking apiserver status ...
	I1212 01:09:39.251193  221952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:39.271558  221952 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W1212 01:09:39.285148  221952 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:09:39.285221  221952 ssh_runner.go:195] Run: ls
	I1212 01:09:39.291337  221952 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I1212 01:09:39.297702  221952 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I1212 01:09:39.297734  221952 status.go:463] multinode-074388 apiserver status = Running (err=<nil>)
	I1212 01:09:39.297744  221952 status.go:176] multinode-074388 status: &{Name:multinode-074388 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 01:09:39.297761  221952 status.go:174] checking status of multinode-074388-m02 ...
	I1212 01:09:39.299575  221952 status.go:371] multinode-074388-m02 host status = "Running" (err=<nil>)
	I1212 01:09:39.299598  221952 host.go:66] Checking if "multinode-074388-m02" exists ...
	I1212 01:09:39.302356  221952 main.go:143] libmachine: domain multinode-074388-m02 has defined MAC address 52:54:00:dc:17:e7 in network mk-multinode-074388
	I1212 01:09:39.302788  221952 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:17:e7", ip: ""} in network mk-multinode-074388: {Iface:virbr1 ExpiryTime:2025-12-12 02:08:07 +0000 UTC Type:0 Mac:52:54:00:dc:17:e7 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-074388-m02 Clientid:01:52:54:00:dc:17:e7}
	I1212 01:09:39.302811  221952 main.go:143] libmachine: domain multinode-074388-m02 has defined IP address 192.168.39.168 and MAC address 52:54:00:dc:17:e7 in network mk-multinode-074388
	I1212 01:09:39.302982  221952 host.go:66] Checking if "multinode-074388-m02" exists ...
	I1212 01:09:39.303195  221952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:09:39.305300  221952 main.go:143] libmachine: domain multinode-074388-m02 has defined MAC address 52:54:00:dc:17:e7 in network mk-multinode-074388
	I1212 01:09:39.305653  221952 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:17:e7", ip: ""} in network mk-multinode-074388: {Iface:virbr1 ExpiryTime:2025-12-12 02:08:07 +0000 UTC Type:0 Mac:52:54:00:dc:17:e7 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-074388-m02 Clientid:01:52:54:00:dc:17:e7}
	I1212 01:09:39.305674  221952 main.go:143] libmachine: domain multinode-074388-m02 has defined IP address 192.168.39.168 and MAC address 52:54:00:dc:17:e7 in network mk-multinode-074388
	I1212 01:09:39.305810  221952 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22101-186349/.minikube/machines/multinode-074388-m02/id_rsa Username:docker}
	I1212 01:09:39.386150  221952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:39.404689  221952 status.go:176] multinode-074388-m02 status: &{Name:multinode-074388-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 01:09:39.404733  221952 status.go:174] checking status of multinode-074388-m03 ...
	I1212 01:09:39.406745  221952 status.go:371] multinode-074388-m03 host status = "Stopped" (err=<nil>)
	I1212 01:09:39.406777  221952 status.go:384] host is not running, skipping remaining checks
	I1212 01:09:39.406788  221952 status.go:176] multinode-074388-m03 status: &{Name:multinode-074388-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.55s)

TestMultiNode/serial/StartAfterStop (44.67s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-074388 node start m03 -v=5 --alsologtostderr: (44.097813536s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (44.67s)

TestMultiNode/serial/RestartKeepsNodes (341.23s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-074388
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-074388
E1212 01:10:28.130619  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:12:09.687676  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:12:54.487245  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-074388: (3m0.023667407s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-074388 --wait=true -v=5 --alsologtostderr
E1212 01:15:28.130649  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-074388 --wait=true -v=5 --alsologtostderr: (2m41.068582471s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-074388
--- PASS: TestMultiNode/serial/RestartKeepsNodes (341.23s)

TestMultiNode/serial/DeleteNode (2.8s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-074388 node delete m03: (2.292718052s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.80s)

TestMultiNode/serial/StopMultiNode (166.35s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 stop
E1212 01:17:09.687588  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:17:54.487484  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:18:31.199736  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-074388 stop: (2m46.213268855s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-074388 status: exit status 7 (70.213259ms)
-- stdout --
	multinode-074388
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-074388-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-074388 status --alsologtostderr: exit status 7 (69.52057ms)
-- stdout --
	multinode-074388
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-074388-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1212 01:18:54.453932  224960 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:18:54.454039  224960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:18:54.454044  224960 out.go:374] Setting ErrFile to fd 2...
	I1212 01:18:54.454048  224960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:18:54.454296  224960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 01:18:54.454495  224960 out.go:368] Setting JSON to false
	I1212 01:18:54.454526  224960 mustload.go:66] Loading cluster: multinode-074388
	I1212 01:18:54.454616  224960 notify.go:221] Checking for updates...
	I1212 01:18:54.455058  224960 config.go:182] Loaded profile config "multinode-074388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:18:54.455085  224960 status.go:174] checking status of multinode-074388 ...
	I1212 01:18:54.457318  224960 status.go:371] multinode-074388 host status = "Stopped" (err=<nil>)
	I1212 01:18:54.457336  224960 status.go:384] host is not running, skipping remaining checks
	I1212 01:18:54.457341  224960 status.go:176] multinode-074388 status: &{Name:multinode-074388 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 01:18:54.457364  224960 status.go:174] checking status of multinode-074388-m02 ...
	I1212 01:18:54.458802  224960 status.go:371] multinode-074388-m02 host status = "Stopped" (err=<nil>)
	I1212 01:18:54.458818  224960 status.go:384] host is not running, skipping remaining checks
	I1212 01:18:54.458824  224960 status.go:176] multinode-074388-m02 status: &{Name:multinode-074388-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (166.35s)

TestMultiNode/serial/RestartMultiNode (119.89s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-074388 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1212 01:19:17.571765  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:20:28.129735  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-074388 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m59.298151281s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074388 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (119.89s)

TestMultiNode/serial/ValidateNameConflict (44.72s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-074388
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-074388-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-074388-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (82.970289ms)

-- stdout --
	* [multinode-074388-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-074388-m02' is duplicated with machine name 'multinode-074388-m02' in profile 'multinode-074388'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-074388-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-074388-m03 --driver=kvm2  --container-runtime=crio: (43.490482247s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-074388
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-074388: exit status 80 (215.468857ms)

-- stdout --
	* Adding node m03 to cluster multinode-074388 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-074388-m03 already exists in multinode-074388-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-074388-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.72s)

TestScheduledStopUnix (112.69s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-415529 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-415529 --memory=3072 --driver=kvm2  --container-runtime=crio: (40.911552388s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-415529 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1212 01:25:02.312056  227443 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:25:02.312306  227443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:02.312318  227443 out.go:374] Setting ErrFile to fd 2...
	I1212 01:25:02.312325  227443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:02.312596  227443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 01:25:02.312905  227443 out.go:368] Setting JSON to false
	I1212 01:25:02.313013  227443 mustload.go:66] Loading cluster: scheduled-stop-415529
	I1212 01:25:02.313363  227443 config.go:182] Loaded profile config "scheduled-stop-415529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:25:02.313437  227443 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/config.json ...
	I1212 01:25:02.313671  227443 mustload.go:66] Loading cluster: scheduled-stop-415529
	I1212 01:25:02.313809  227443 config.go:182] Loaded profile config "scheduled-stop-415529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-415529 -n scheduled-stop-415529
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-415529 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1212 01:25:02.642011  227504 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:25:02.642135  227504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:02.642143  227504 out.go:374] Setting ErrFile to fd 2...
	I1212 01:25:02.642147  227504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:02.642383  227504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 01:25:02.642652  227504 out.go:368] Setting JSON to false
	I1212 01:25:02.642873  227504 daemonize_unix.go:73] killing process 227476 as it is an old scheduled stop
	I1212 01:25:02.642992  227504 mustload.go:66] Loading cluster: scheduled-stop-415529
	I1212 01:25:02.643503  227504 config.go:182] Loaded profile config "scheduled-stop-415529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:25:02.643601  227504 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/config.json ...
	I1212 01:25:02.643812  227504 mustload.go:66] Loading cluster: scheduled-stop-415529
	I1212 01:25:02.643939  227504 config.go:182] Loaded profile config "scheduled-stop-415529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1212 01:25:02.648619  190272 retry.go:31] will retry after 115.006µs: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.649813  190272 retry.go:31] will retry after 84.25µs: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.651018  190272 retry.go:31] will retry after 151.003µs: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.652177  190272 retry.go:31] will retry after 385.301µs: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.653364  190272 retry.go:31] will retry after 745.268µs: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.654566  190272 retry.go:31] will retry after 433.149µs: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.655724  190272 retry.go:31] will retry after 708.18µs: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.656884  190272 retry.go:31] will retry after 1.492407ms: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.659130  190272 retry.go:31] will retry after 2.321727ms: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.662469  190272 retry.go:31] will retry after 2.603347ms: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.665784  190272 retry.go:31] will retry after 4.609469ms: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.671111  190272 retry.go:31] will retry after 12.846325ms: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.684400  190272 retry.go:31] will retry after 9.504903ms: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.694736  190272 retry.go:31] will retry after 15.089246ms: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.709994  190272 retry.go:31] will retry after 22.686227ms: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.733337  190272 retry.go:31] will retry after 32.276438ms: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
I1212 01:25:02.766699  190272 retry.go:31] will retry after 36.623138ms: open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-415529 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-415529 -n scheduled-stop-415529
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-415529
E1212 01:25:28.129550  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-415529 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1212 01:25:28.415620  227662 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:25:28.415923  227662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:28.415934  227662 out.go:374] Setting ErrFile to fd 2...
	I1212 01:25:28.415941  227662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:28.416202  227662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 01:25:28.416486  227662 out.go:368] Setting JSON to false
	I1212 01:25:28.416590  227662 mustload.go:66] Loading cluster: scheduled-stop-415529
	I1212 01:25:28.416936  227662 config.go:182] Loaded profile config "scheduled-stop-415529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:25:28.417027  227662 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/scheduled-stop-415529/config.json ...
	I1212 01:25:28.417244  227662 mustload.go:66] Loading cluster: scheduled-stop-415529
	I1212 01:25:28.417366  227662 config.go:182] Loaded profile config "scheduled-stop-415529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-415529
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-415529: exit status 7 (65.946167ms)

-- stdout --
	scheduled-stop-415529
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-415529 -n scheduled-stop-415529
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-415529 -n scheduled-stop-415529: exit status 7 (65.301402ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-415529" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-415529
--- PASS: TestScheduledStopUnix (112.69s)

TestRunningBinaryUpgrade (401.42s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.386222977 start -p running-upgrade-620017 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1212 01:27:09.687712  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.386222977 start -p running-upgrade-620017 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m23.646849043s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-620017 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-620017 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (5m16.217818941s)
helpers_test.go:176: Cleaning up "running-upgrade-620017" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-620017
--- PASS: TestRunningBinaryUpgrade (401.42s)

TestKubernetesUpgrade (179.24s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-686449 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-686449 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.437874973s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-686449
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-686449: (2.605493829s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-686449 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-686449 status --format={{.Host}}: exit status 7 (95.564545ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-686449 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-686449 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.679920832s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-686449 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-686449 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-686449 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (94.504056ms)

-- stdout --
	* [kubernetes-upgrade-686449] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-686449
	    minikube start -p kubernetes-upgrade-686449 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6864492 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-686449 --kubernetes-version=v1.35.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-686449 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-686449 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.282464802s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-686449" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-686449
--- PASS: TestKubernetesUpgrade (179.24s)

TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestStoppedBinaryUpgrade/Upgrade (167.95s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3945508548 start -p stopped-upgrade-788791 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3945508548 start -p stopped-upgrade-788791 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m46.480431403s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3945508548 -p stopped-upgrade-788791 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3945508548 -p stopped-upgrade-788791 stop: (1.972068866s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-788791 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-788791 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.50024385s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (167.95s)

TestISOImage/Setup (47.68s)

=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-561224 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1212 01:27:54.483896  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-561224 --no-kubernetes --driver=kvm2  --container-runtime=crio: (47.676418986s)
--- PASS: TestISOImage/Setup (47.68s)

TestISOImage/Binaries/crictl (0.18s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.18s)

TestISOImage/Binaries/curl (0.2s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.20s)

TestISOImage/Binaries/docker (0.19s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.19s)

TestISOImage/Binaries/git (0.16s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.16s)

TestISOImage/Binaries/iptables (0.18s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.18s)

TestISOImage/Binaries/podman (0.18s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.18s)

TestISOImage/Binaries/rsync (0.17s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.17s)

TestISOImage/Binaries/socat (0.17s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

TestISOImage/Binaries/wget (0.17s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.17s)

TestISOImage/Binaries/VBoxControl (0.19s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.19s)

TestISOImage/Binaries/VBoxService (0.18s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.18s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.6s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-788791
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-788791: (1.599131527s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.60s)

TestPause/serial/Start (95.28s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-321955 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-321955 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m35.277119797s)
--- PASS: TestPause/serial/Start (95.28s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-985362 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-985362 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (112.011548ms)

-- stdout --
	* [NoKubernetes-985362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (64.74s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-985362 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-985362 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.478657828s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-985362 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (64.74s)

TestNetworkPlugins/group/false (4.12s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-028084 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-028084 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (134.890886ms)

-- stdout --
	* [false-028084] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration

-- /stdout --
** stderr ** 
	I1212 01:30:25.840172  231736 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:30:25.840333  231736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:30:25.840343  231736 out.go:374] Setting ErrFile to fd 2...
	I1212 01:30:25.840350  231736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:30:25.840648  231736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-186349/.minikube/bin
	I1212 01:30:25.841266  231736 out.go:368] Setting JSON to false
	I1212 01:30:25.842729  231736 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25970,"bootTime":1765477056,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 01:30:25.842835  231736 start.go:143] virtualization: kvm guest
	I1212 01:30:25.845195  231736 out.go:179] * [false-028084] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 01:30:25.846952  231736 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:30:25.847111  231736 notify.go:221] Checking for updates...
	I1212 01:30:25.849441  231736 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:30:25.850942  231736 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-186349/kubeconfig
	I1212 01:30:25.852398  231736 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-186349/.minikube
	I1212 01:30:25.853865  231736 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 01:30:25.855288  231736 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:30:25.857151  231736 config.go:182] Loaded profile config "NoKubernetes-985362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:30:25.857311  231736 config.go:182] Loaded profile config "guest-561224": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1212 01:30:25.857510  231736 config.go:182] Loaded profile config "pause-321955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:30:25.857654  231736 config.go:182] Loaded profile config "running-upgrade-620017": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1212 01:30:25.857805  231736 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:30:25.901032  231736 out.go:179] * Using the kvm2 driver based on user configuration
	I1212 01:30:25.902625  231736 start.go:309] selected driver: kvm2
	I1212 01:30:25.902650  231736 start.go:927] validating driver "kvm2" against <nil>
	I1212 01:30:25.902668  231736 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:30:25.904885  231736 out.go:203] 
	W1212 01:30:25.906144  231736 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1212 01:30:25.907233  231736 out.go:203] 

** /stderr **
E1212 01:30:28.128996  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:88: 
----------------------- debugLogs start: false-028084 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-028084
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-028084
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-028084
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-028084
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-028084
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-028084
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-028084
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-028084
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-028084
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-028084
>>> host: /etc/nsswitch.conf:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: /etc/hosts:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: /etc/resolv.conf:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-028084
>>> host: crictl pods:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: crictl containers:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> k8s: describe netcat deployment:
error: context "false-028084" does not exist
>>> k8s: describe netcat pod(s):
error: context "false-028084" does not exist
>>> k8s: netcat logs:
error: context "false-028084" does not exist
>>> k8s: describe coredns deployment:
error: context "false-028084" does not exist
>>> k8s: describe coredns pods:
error: context "false-028084" does not exist
>>> k8s: coredns logs:
error: context "false-028084" does not exist
>>> k8s: describe api server pod(s):
error: context "false-028084" does not exist
>>> k8s: api server logs:
error: context "false-028084" does not exist
>>> host: /etc/cni:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: ip a s:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: ip r s:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: iptables-save:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: iptables table nat:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> k8s: describe kube-proxy daemon set:
error: context "false-028084" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "false-028084" does not exist
>>> k8s: kube-proxy logs:
error: context "false-028084" does not exist
>>> host: kubelet daemon status:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: kubelet daemon config:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> k8s: kubelet logs:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 01:29:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.238:8443
  name: pause-321955
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 01:28:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.122:8443
  name: running-upgrade-620017
contexts:
- context:
    cluster: pause-321955
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 01:29:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-321955
  name: pause-321955
- context:
    cluster: running-upgrade-620017
    user: running-upgrade-620017
  name: running-upgrade-620017
current-context: ""
kind: Config
users:
- name: pause-321955
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/client.crt
    client-key: /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/client.key
- name: running-upgrade-620017
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/running-upgrade-620017/client.crt
    client-key: /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/running-upgrade-620017/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-028084
>>> host: docker daemon status:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: docker daemon config:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: /etc/docker/daemon.json:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: docker system info:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: cri-docker daemon status:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: cri-docker daemon config:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: cri-dockerd version:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: containerd daemon status:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: containerd daemon config:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: /etc/containerd/config.toml:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: containerd config dump:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: crio daemon status:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: crio daemon config:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: /etc/crio:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
>>> host: crio config:
* Profile "false-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-028084"
----------------------- debugLogs end: false-028084 [took: 3.701328573s] --------------------------------
helpers_test.go:176: Cleaning up "false-028084" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-028084
--- PASS: TestNetworkPlugins/group/false (4.12s)

TestNoKubernetes/serial/StartWithStopK8s (33.09s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-985362 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-985362 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (31.912714992s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-985362 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-985362 status -o json: exit status 2 (247.120698ms)

-- stdout --
	{"Name":"NoKubernetes-985362","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-985362
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (33.09s)

TestNoKubernetes/serial/Start (23.78s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-985362 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-985362 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (23.776543499s)
--- PASS: TestNoKubernetes/serial/Start (23.78s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22101-186349/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-985362 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-985362 "sudo systemctl is-active --quiet service kubelet": exit status 1 (194.10201ms)

** stderr **
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

TestNoKubernetes/serial/ProfileList (1.7s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.70s)

TestNoKubernetes/serial/Stop (1.46s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-985362
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-985362: (1.457586957s)
--- PASS: TestNoKubernetes/serial/Stop (1.46s)

TestNoKubernetes/serial/StartNoArgs (20.97s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-985362 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-985362 --driver=kvm2  --container-runtime=crio: (20.969000626s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (20.97s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-985362 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-985362 "sudo systemctl is-active --quiet service kubelet": exit status 1 (180.739238ms)

** stderr **
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (114.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-877369 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1212 01:32:09.687598  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-877369 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m54.290951783s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (114.29s)

TestStartStop/group/no-preload/serial/FirstStart (74.02s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-264631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1212 01:32:54.483993  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-264631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m14.023846628s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.02s)

TestStartStop/group/embed-certs/serial/FirstStart (95.65s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-201146 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-201146 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m35.647501337s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.65s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-877369 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [6ce78698-6d6d-4ff8-8524-7a686ff50146] Pending
helpers_test.go:353: "busybox" [6ce78698-6d6d-4ff8-8524-7a686ff50146] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [6ce78698-6d6d-4ff8-8524-7a686ff50146] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003779223s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-877369 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.36s)

TestStartStop/group/no-preload/serial/DeployApp (9.4s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-264631 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7132e25d-783e-49f2-8c7f-9af96a406034] Pending
helpers_test.go:353: "busybox" [7132e25d-783e-49f2-8c7f-9af96a406034] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7132e25d-783e-49f2-8c7f-9af96a406034] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.006229177s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-264631 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-877369 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-877369 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.139206728s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-877369 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/old-k8s-version/serial/Stop (83.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-877369 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-877369 --alsologtostderr -v=3: (1m23.200343574s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (83.20s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-264631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-264631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.123794378s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-264631 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/no-preload/serial/Stop (76.7s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-264631 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-264631 --alsologtostderr -v=3: (1m16.698380349s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (76.70s)

TestStartStop/group/embed-certs/serial/DeployApp (10.34s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-201146 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [1e55030a-d3c6-422c-8ebc-0808b0a5af0a] Pending
helpers_test.go:353: "busybox" [1e55030a-d3c6-422c-8ebc-0808b0a5af0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [1e55030a-d3c6-422c-8ebc-0808b0a5af0a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004807158s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-201146 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-201146 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-201146 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.015811553s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-201146 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/embed-certs/serial/Stop (84.25s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-201146 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-201146 --alsologtostderr -v=3: (1m24.248515007s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (84.25s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.16s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-513133 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1212 01:35:11.201426  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:35:28.130967  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-513133 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m22.160429353s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.16s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-264631 -n no-preload-264631
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-264631 -n no-preload-264631: exit status 7 (78.832786ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-264631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (58.14s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-264631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-264631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (57.763768658s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-264631 -n no-preload-264631
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (58.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-877369 -n old-k8s-version-877369
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-877369 -n old-k8s-version-877369: exit status 7 (69.050422ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-877369 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/old-k8s-version/serial/SecondStart (63.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-877369 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1212 01:35:57.573854  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-877369 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.818125951s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-877369 -n old-k8s-version-877369
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (63.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-201146 -n embed-certs-201146
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-201146 -n embed-certs-201146: exit status 7 (87.032077ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-201146 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (53.47s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-201146 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-201146 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (53.079345018s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-201146 -n embed-certs-201146
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.47s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.45s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-513133 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b1575292-5094-48fa-be86-bc08f8778925] Pending
helpers_test.go:353: "busybox" [b1575292-5094-48fa-be86-bc08f8778925] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b1575292-5094-48fa-be86-bc08f8778925] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.011503595s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-513133 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.45s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.56s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-513133 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-513133 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.41842068s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-513133 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.56s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jk2rg" [53221e2e-cff4-43a2-b061-999b1e0b948f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jk2rg" [53221e2e-cff4-43a2-b061-999b1e0b948f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.005865811s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (85.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-513133 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-513133 --alsologtostderr -v=3: (1m25.220326335s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (85.22s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-29bvb" [c3f68049-6a97-415b-a8c8-30058cc172cc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-29bvb" [c3f68049-6a97-415b-a8c8-30058cc172cc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.073780828s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.08s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jk2rg" [53221e2e-cff4-43a2-b061-999b1e0b948f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004708586s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-264631 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-264631 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (3.02s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-264631 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-264631 -n no-preload-264631
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-264631 -n no-preload-264631: exit status 2 (264.501445ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-264631 -n no-preload-264631
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-264631 -n no-preload-264631: exit status 2 (264.308605ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-264631 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-264631 -n no-preload-264631
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-264631 -n no-preload-264631
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.02s)
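The Pause entries above all follow one pattern: pause the profile, expect `status` to exit non-zero while paused (logged as "may be ok" rather than a failure), then unpause. A minimal sketch of that tolerant status check, using a stub `status` function as a stand-in for the real `out/minikube-linux-amd64 status --format={{.APIServer}}` call (the stub is an assumption for illustration; the real binary is not invoked here):

```shell
# Tolerant status check, as the Pause test does: a non-zero exit from
# `status` is recorded alongside its stdout instead of failing the run.
# While a cluster is paused, the real command prints "Paused" and exits 2.
status() {
  echo "Paused"
  return 2
}

out=$(status)   # capture stdout; the stub's exit code lands in $?
code=$?
if [ "$code" -ne 0 ]; then
  echo "status error: exit status $code (may be ok)"
fi
echo "-- stdout --"
echo "	$out"
```

The test repeats the same check with `--format={{.Kubelet}}` (which reports "Stopped" while paused) before running `unpause`.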

TestStartStop/group/newest-cni/serial/FirstStart (49.07s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-406228 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-406228 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (49.06571609s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.07s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-29bvb" [c3f68049-6a97-415b-a8c8-30058cc172cc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005668785s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-877369 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-877369 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (3.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-877369 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-877369 --alsologtostderr -v=1: (1.031787416s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-877369 -n old-k8s-version-877369
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-877369 -n old-k8s-version-877369: exit status 2 (262.397643ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-877369 -n old-k8s-version-877369
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-877369 -n old-k8s-version-877369: exit status 2 (240.647579ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-877369 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-877369 -n old-k8s-version-877369
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-877369 -n old-k8s-version-877369
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.28s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-9txds" [51bffcd3-196b-478c-8b28-aadbdd2968bc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-9txds" [51bffcd3-196b-478c-8b28-aadbdd2968bc] Running
E1212 01:37:09.687796  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.005919126s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

TestNetworkPlugins/group/auto/Start (99.12s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m39.117665728s)
--- PASS: TestNetworkPlugins/group/auto/Start (99.12s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-9txds" [51bffcd3-196b-478c-8b28-aadbdd2968bc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005345607s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-201146 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-201146 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/Pause (3.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-201146 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-201146 --alsologtostderr -v=1: (1.029790599s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-201146 -n embed-certs-201146
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-201146 -n embed-certs-201146: exit status 2 (275.151703ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-201146 -n embed-certs-201146
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-201146 -n embed-certs-201146: exit status 2 (262.734774ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-201146 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-201146 --alsologtostderr -v=1: (1.132880685s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-201146 -n embed-certs-201146
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-201146 -n embed-certs-201146
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.39s)

TestNetworkPlugins/group/kindnet/Start (77.66s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m17.662153112s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.66s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-406228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-406228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.344249089s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

TestStartStop/group/newest-cni/serial/Stop (88.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-406228 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-406228 --alsologtostderr -v=3: (1m28.265343875s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (88.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-513133 -n default-k8s-diff-port-513133
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-513133 -n default-k8s-diff-port-513133: exit status 7 (70.682891ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-513133 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-513133 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1212 01:37:54.483322  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:38:32.764119  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-843156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-513133 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (53.845613661s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-513133 -n default-k8s-diff-port-513133
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.16s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-rvvqj" [31bf5386-14fa-450a-bc44-1de744cea5c1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004535292s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-028084 "pgrep -a kubelet"
I1212 01:38:41.671532  190272 config.go:182] Loaded profile config "auto-028084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

TestNetworkPlugins/group/auto/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-028084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-pfbx9" [ab615b85-3d3c-480a-9b48-f59a7ac07e48] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-pfbx9" [ab615b85-3d3c-480a-9b48-f59a7ac07e48] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00612067s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-028084 "pgrep -a kubelet"
I1212 01:38:47.360512  190272 config.go:182] Loaded profile config "kindnet-028084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-028084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2hjb7" [198d0597-e1ef-4896-bf58-b9766aff0b88] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-2hjb7" [198d0597-e1ef-4896-bf58-b9766aff0b88] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00552217s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-trnvg" [c137fd13-24e6-4074-a424-c8617089938a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-trnvg" [c137fd13-24e6-4074-a424-c8617089938a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004095883s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-028084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-028084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-trnvg" [c137fd13-24e6-4074-a424-c8617089938a] Running
E1212 01:39:00.465375  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/old-k8s-version-877369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:00.787521  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/old-k8s-version-877369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:01.429686  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/old-k8s-version-877369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:02.711987  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/old-k8s-version-877369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:02.760522  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:02.767050  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:02.778841  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:02.800426  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:02.842103  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:02.923755  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:03.085988  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:03.407339  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:04.049362  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:05.273957  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/old-k8s-version-877369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:05.331104  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005045753s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-513133 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-513133 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-513133 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-513133 --alsologtostderr -v=1: (1.077391358s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-513133 -n default-k8s-diff-port-513133
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-513133 -n default-k8s-diff-port-513133: exit status 2 (272.719069ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-513133 -n default-k8s-diff-port-513133
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-513133 -n default-k8s-diff-port-513133: exit status 2 (260.449958ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-513133 --alsologtostderr -v=1
E1212 01:39:07.893220  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-513133 -n default-k8s-diff-port-513133
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-513133 -n default-k8s-diff-port-513133
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.25s)

TestNetworkPlugins/group/calico/Start (91.9s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m31.901370394s)
--- PASS: TestNetworkPlugins/group/calico/Start (91.90s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-406228 -n newest-cni-406228
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-406228 -n newest-cni-406228: exit status 7 (94.054831ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-406228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (56.9s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-406228 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-406228 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (56.529045555s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-406228 -n newest-cni-406228
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (56.90s)

TestNetworkPlugins/group/custom-flannel/Start (113.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1212 01:39:13.015764  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m53.322180307s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (113.32s)

TestNetworkPlugins/group/enable-default-cni/Start (146.69s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1212 01:39:20.637660  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/old-k8s-version-877369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:23.257530  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:41.119930  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/old-k8s-version-877369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:39:43.738952  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m26.690389195s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (146.69s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-406228 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/newest-cni/serial/Pause (4.9s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-406228 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-406228 --alsologtostderr -v=1: (1.492288347s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-406228 -n newest-cni-406228
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-406228 -n newest-cni-406228: exit status 2 (363.894832ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-406228 -n newest-cni-406228
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-406228 -n newest-cni-406228: exit status 2 (361.278654ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-406228 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-406228 --alsologtostderr -v=1: (1.547804536s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-406228 -n newest-cni-406228
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-406228 -n newest-cni-406228
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.90s)

TestNetworkPlugins/group/flannel/Start (97.42s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1212 01:40:22.081888  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/old-k8s-version-877369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:40:24.700511  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:40:28.129534  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/functional-582645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m37.418977862s)
--- PASS: TestNetworkPlugins/group/flannel/Start (97.42s)

TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-9z6hd" [1eeaaf9a-cfa7-4853-a21e-3fa50da1a881] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.017408908s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-028084 "pgrep -a kubelet"
I1212 01:40:45.381140  190272 config.go:182] Loaded profile config "calico-028084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-028084 replace --force -f testdata/netcat-deployment.yaml
I1212 01:40:45.738609  190272 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5pv92" [1dea0f1d-c8f5-4bc4-8740-72d3c9a6249c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5pv92" [1dea0f1d-c8f5-4bc4-8740-72d3c9a6249c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.007029443s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.38s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-028084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-028084 "pgrep -a kubelet"
I1212 01:41:04.252136  190272 config.go:182] Loaded profile config "custom-flannel-028084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.46s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-028084 replace --force -f testdata/netcat-deployment.yaml
I1212 01:41:04.695508  190272 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7vkdd" [0f0d746a-d46d-43dc-a126-00fbe07da408] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7vkdd" [0f0d746a-d46d-43dc-a126-00fbe07da408] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004526748s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.46s)

TestNetworkPlugins/group/bridge/Start (94.19s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1212 01:41:17.111261  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/default-k8s-diff-port-513133/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:41:17.117877  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/default-k8s-diff-port-513133/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:41:17.129485  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/default-k8s-diff-port-513133/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:41:17.151046  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/default-k8s-diff-port-513133/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:41:17.192590  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/default-k8s-diff-port-513133/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:41:17.274173  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/default-k8s-diff-port-513133/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:41:17.435801  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/default-k8s-diff-port-513133/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-028084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m34.188845399s)
--- PASS: TestNetworkPlugins/group/bridge/Start (94.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-028084 exec deployment/netcat -- nslookup kubernetes.default
E1212 01:41:17.757933  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/default-k8s-diff-port-513133/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestISOImage/PersistentMounts//data (0.23s)

=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.23s)

TestISOImage/PersistentMounts//var/lib/docker (0.22s)

=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.22s)

TestISOImage/PersistentMounts//var/lib/cni (0.21s)

=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.21s)

TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

TestISOImage/PersistentMounts//var/lib/minikube (0.2s)

=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.20s)

TestISOImage/PersistentMounts//var/lib/toolbox (0.22s)

=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
E1212 01:41:37.607788  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/default-k8s-diff-port-513133/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.22s)

TestISOImage/PersistentMounts//var/lib/boot2docker (0.2s)

=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.20s)

TestISOImage/VersionJSON (0.2s)

=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   commit: 28bc9824e3c85d2e3519912c2810d5729ab9ce8c
iso_test.go:118:   iso_version: v1.37.0-1765481609-22101
iso_test.go:118:   kicbase_version: v0.0.48-1765275396-22083
iso_test.go:118:   minikube_version: v1.37.0
--- PASS: TestISOImage/VersionJSON (0.20s)

TestISOImage/eBPFSupport (0.2s)

=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-561224 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-028084 "pgrep -a kubelet"
I1212 01:41:41.513046  190272 config.go:182] Loaded profile config "enable-default-cni-028084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-028084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2xwbd" [db944543-b686-4627-bf54-2cee17d27ffc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:41:44.003914  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/old-k8s-version-877369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:41:46.622395  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/no-preload-264631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-2xwbd" [db944543-b686-4627-bf54-2cee17d27ffc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.008045251s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)
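The harness above polls the pod list under a 15m deadline and reports how long the pods took to become healthy. The same poll-with-deadline pattern can be sketched in plain shell (`wait_until` is an illustrative helper, not a minikube function):

```shell
#!/bin/sh
# Poll a command until it succeeds or a deadline passes.
# usage: wait_until <timeout_seconds> <command...>
wait_until() {
  deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then return 1; fi
    sleep 1
  done
}

# Demo: the condition (a marker file) becomes true after ~1s.
marker=$(mktemp -u)
( sleep 1; touch "$marker" ) &
wait_until 10 test -f "$marker" && echo healthy   # healthy
rm -f "$marker"
```

Real pod polling would substitute a `kubectl get pods` check for `test -f`, but the deadline/retry skeleton is the same.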

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-xfqzk" [5d0c5c56-635a-440f-8168-5d2d727fd7b3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006786294s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-028084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-028084 "pgrep -a kubelet"
I1212 01:41:57.166361  190272 config.go:182] Loaded profile config "flannel-028084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-028084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-bhp48" [90daf5fd-ea8f-4637-9fa5-9b532c2b0574] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:41:58.090164  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/default-k8s-diff-port-513133/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-bhp48" [90daf5fd-ea8f-4637-9fa5-9b532c2b0574] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003884361s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-028084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-028084 "pgrep -a kubelet"
I1212 01:42:50.398976  190272 config.go:182] Loaded profile config "bridge-028084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-028084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-prjnf" [eede2075-ad3b-45e1-94ba-c1ea8b30eca3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:42:54.482849  190272 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/addons-081397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-prjnf" [eede2075-ad3b-45e1-94ba-c1ea8b30eca3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005318635s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-028084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-028084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (52/431)

Order Skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.39
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
375 TestStartStop/group/disable-driver-mounts 0.18
385 TestNetworkPlugins/group/kubenet 5.05
393 TestNetworkPlugins/group/cilium 4.56
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.39s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-081397 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.39s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-824142" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-824142
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-028084 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-028084

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-028084

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-028084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-028084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-028084

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-028084

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-028084

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-028084

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-028084

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-028084

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: /etc/hosts:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: /etc/resolv.conf:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-028084

>>> host: crictl pods:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: crictl containers:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> k8s: describe netcat deployment:
error: context "kubenet-028084" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-028084" does not exist

>>> k8s: netcat logs:
error: context "kubenet-028084" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-028084" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-028084" does not exist

>>> k8s: coredns logs:
error: context "kubenet-028084" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-028084" does not exist

>>> k8s: api server logs:
error: context "kubenet-028084" does not exist

>>> host: /etc/cni:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: ip a s:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: ip r s:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: iptables-save:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: iptables table nat:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-028084" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-028084" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-028084" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: kubelet daemon config:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> k8s: kubelet logs:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 01:29:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.238:8443
  name: pause-321955
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 01:28:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.122:8443
  name: running-upgrade-620017
contexts:
- context:
    cluster: pause-321955
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 01:29:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-321955
  name: pause-321955
- context:
    cluster: running-upgrade-620017
    user: running-upgrade-620017
  name: running-upgrade-620017
current-context: ""
kind: Config
users:
- name: pause-321955
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/client.crt
    client-key: /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/client.key
- name: running-upgrade-620017
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/running-upgrade-620017/client.crt
    client-key: /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/running-upgrade-620017/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-028084

>>> host: docker daemon status:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: docker daemon config:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: docker system info:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: cri-docker daemon status:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: cri-docker daemon config:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: cri-dockerd version:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: containerd daemon status:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: containerd daemon config:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: containerd config dump:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: crio daemon status:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: crio daemon config:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: /etc/crio:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

>>> host: crio config:
* Profile "kubenet-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-028084"

----------------------- debugLogs end: kubenet-028084 [took: 4.848724491s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-028084" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-028084
--- SKIP: TestNetworkPlugins/group/kubenet (5.05s)
TestNetworkPlugins/group/cilium (4.56s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-028084 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-028084

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-028084

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-028084

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-028084

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-028084

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-028084

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-028084

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-028084

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-028084

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-028084

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: /etc/hosts:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: /etc/resolv.conf:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-028084

>>> host: crictl pods:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: crictl containers:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> k8s: describe netcat deployment:
error: context "cilium-028084" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-028084" does not exist

>>> k8s: netcat logs:
error: context "cilium-028084" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-028084" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-028084" does not exist

>>> k8s: coredns logs:
error: context "cilium-028084" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-028084" does not exist

>>> k8s: api server logs:
error: context "cilium-028084" does not exist

>>> host: /etc/cni:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: ip a s:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: ip r s:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: iptables-save:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: iptables table nat:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-028084

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-028084

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-028084" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-028084" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-028084

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-028084

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-028084" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-028084" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-028084" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-028084" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-028084" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: kubelet daemon config:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> k8s: kubelet logs:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 01:29:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.238:8443
  name: pause-321955
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-186349/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 01:28:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.122:8443
  name: running-upgrade-620017
contexts:
- context:
    cluster: pause-321955
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 01:29:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-321955
  name: pause-321955
- context:
    cluster: running-upgrade-620017
    user: running-upgrade-620017
  name: running-upgrade-620017
current-context: ""
kind: Config
users:
- name: pause-321955
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/client.crt
    client-key: /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/pause-321955/client.key
- name: running-upgrade-620017
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/running-upgrade-620017/client.crt
    client-key: /home/jenkins/minikube-integration/22101-186349/.minikube/profiles/running-upgrade-620017/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-028084

>>> host: docker daemon status:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: docker daemon config:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: docker system info:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: cri-docker daemon status:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: cri-docker daemon config:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: cri-dockerd version:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: containerd daemon status:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: containerd daemon config:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: containerd config dump:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: crio daemon status:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: crio daemon config:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: /etc/crio:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

>>> host: crio config:
* Profile "cilium-028084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-028084"

----------------------- debugLogs end: cilium-028084 [took: 4.364274316s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-028084" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-028084
--- SKIP: TestNetworkPlugins/group/cilium (4.56s)
